Re: [sqlalchemy] session.expire{,_all}_unmodified for dropping objects that are unchanged?!

2018-01-18 Thread Torsten Landschoff
ent_only=True) something like that.   But it's one of those
> things where when someone needs that, I want to see why, because I
> think there's usually another way to handle the situation.
True.

And I want to state very clearly that the problem here originated in
application code, not inside SQLAlchemy.
I would still argue that changing expire() to drop only persisted data
(which would take a number of releases to become the default) would make
this an instance of a pit of success
<https://blog.codinghorror.com/falling-into-the-pit-of-success/>.

Of course this would need quite some effort. Thanks to your hints, I
think we are good now: I added a safe_expire helper to our code base
using only public SQLAlchemy APIs.
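The helper itself is not shown in this thread; a minimal sketch of what such a helper could look like (the name safe_expire matches the mail, but the raise-instead-of-silently-discard policy and the demo model are my assumptions):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker


def safe_expire(session, obj):
    """Expire obj, but refuse to silently discard pending changes."""
    if session.is_modified(obj, include_collections=True):
        raise ValueError("refusing to expire an instance with pending changes")
    session.expire(obj)


# Demo setup
Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(engine)()

user = User(name="Joe")
session.add(user)
session.commit()

safe_expire(session, user)   # clean instance: expires as usual
user.name = "Jane"           # pending change ...
# safe_expire(session, user) would now raise instead of dropping "Jane"
```

Session.is_modified() and Session.expire() are both public API, which is the whole point: no reliance on InstanceState internals.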

Many thanks and greetings, Torsten

-- 
$---+1+2+3+4+5+6+

SCALE GmbH
Niederlassung Dresden
Torsten Landschoff
Pohlandstraße 19
01309 Dresden

Tel: +49-351-312002-10
Fax: +49-351-312002-29

SCALE GmbH
Registergericht und Sitz: Ingolstadt, HRB 6384
Geschäftsführer: Dr.-Ing. Heiner Müllerschön, Dipl.-Math. Ulrich Franz

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] session.expire{,_all}_unmodified for dropping objects that are unchanged?!

2018-01-17 Thread Torsten Landschoff
g changes due to calls
to expire. In hindsight this seems quite obvious, but I did not think
about this when writing that code (because I knew the objects in
question are clean anyway).

Greetings and thanks for any insights,

Torsten



[sqlalchemy] ClassManager.new_instance() still undocumented, fails for polymorphic mappers

2017-11-17 Thread Torsten Landschoff
Hi Michael et al,


tl;dr: I would like ClassManager.new_instance() to be made official, as
discussed in the old thread below, and I suggest that it should
initialize the polymorphic identity.

Context
===

I am not sure if anybody remembers the discussion under
https://groups.google.com/d/msg/sqlalchemy/8kbu8jL0QII/1dWTRp_DVYYJ
where I tried to recreate objects not by unpickling but from my own
serialization code.

With SQLAlchemy I could use MyClass.__new__ to create new instances,
which I would then populate. That failed with SQLAlchemy 0.7, and I
thought about using ClassManager.new_instance() instead.

Turns out that I never did. Instead, my code currently initializes each
instance needlessly via the __init__ method, only to drop that
information in favor of the deserialized data.

What's worse: __init__ no longer initializes new instances; instead, the
code lazily initializes fields before writing to the database or on
first access via a property - sic!


Needless to say, I would like to get rid of this. Trying to do so already
uncovered a bug in the software where creation times were lost during
deserialization, ending up as the Unix epoch.


Example code


Basically I would like to initialize instances like this:

-
mapper = class_mapper(Circle)
circle = mapper.class_manager.new_instance()
circle.radius = 42
session.add(circle)
session.commit()
-

This fails to initialize the target_type column of Shape (from which
Circle inherits). In other words, new_instance() works fine for
concretely mapped classes, but not when polymorphism is involved.


Running the attached example gives this output:

> $ pipenv install
> [...]
> $ pipenv run python example.py 
> 
> This works (as expected)
> 
> This fails (unexpectedly)
> -> Error is IntegrityError('(sqlite3.IntegrityError) NOT NULL constraint 
> failed: shape.target_type',)
> 
> This works (but is a bit ugly)


Would you please consider documenting this as a supported use case, and
potentially extending new_instance() to set the polymorphic identity?


Thanks a bunch, especially for your hard work in creating SQLAlchemy!

Greetings, Torsten


from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import class_mapper, sessionmaker

# Interesting stuff starts at "Evaluate ClassManager.create_instance"

Base = declarative_base()


class Company(Base):
    __tablename__ = "company"

    id = Column(Integer, primary_key=True)
    ticker = Column(String)
    name = Column(String)


class Shape(Base):
    __tablename__ = "shape"

    id = Column(Integer, primary_key=True)
    target_type = Column(String(50), nullable=False)

    __mapper_args__ = {"polymorphic_on": target_type}


class Circle(Shape):
    __tablename__ = "circle"

    id = Column(Integer, ForeignKey(Shape.id), primary_key=True)
    radius = Column(Integer)

    __mapper_args__ = {"polymorphic_identity": "circle"}


engine = create_engine("sqlite:///", echo=None)
Base.metadata.create_all(engine)

session = sessionmaker(engine)()


# Evaluate ClassManager.create_instance
#---

print("\nThis works (as expected)")

mapper = class_mapper(Company)
company = mapper.class_manager.new_instance()
company.name = "Red Hat, Inc."
company.ticker = "RHAT"
session.add(company)
session.commit()

# SQL: INSERT INTO company (ticker, name) VALUES (?, ?)
# VARS: ('RHAT', 'Red Hat, Inc.')


print("\nThis fails (unexpectedly)")

try:
    mapper = class_mapper(Circle)
    circle = mapper.class_manager.new_instance()
    circle.radius = 42
    session.add(circle)
    session.commit()
    # SQL: INSERT INTO shape (target_type) VALUES (?)
    # VARS: (None,)
except Exception as e:
    print("-> Error is {0!r}".format(e))
    session.rollback()


print("\nThis works (but is a bit ugly)")
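The attachment breaks off here; the "works (but is a bit ugly)" variant presumably assigns the discriminator by hand after new_instance(). A self-contained sketch of that workaround (my reconstruction, not the original attachment code):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import class_mapper, sessionmaker

Base = declarative_base()

class Shape(Base):
    __tablename__ = "shape"
    id = Column(Integer, primary_key=True)
    target_type = Column(String(50), nullable=False)
    __mapper_args__ = {"polymorphic_on": target_type}

class Circle(Shape):
    __tablename__ = "circle"
    id = Column(Integer, ForeignKey(Shape.id), primary_key=True)
    radius = Column(Integer)
    __mapper_args__ = {"polymorphic_identity": "circle"}

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(engine)()

# new_instance() bypasses __init__, so the discriminator must be set by hand;
# the mapper already knows the value via its polymorphic_identity attribute.
mapper = class_mapper(Circle)
circle = mapper.class_manager.new_instance()
circle.target_type = mapper.polymorphic_identity  # "circle"
circle.radius = 42
session.add(circle)
session.commit()
```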

Re: [sqlalchemy] Re: Abusing a string column as list of foreign keys (own relationship class, based on DependencyProcessor?)

2016-09-23 Thread Torsten Landschoff

Hi Mike,

thanks for the reply.

On Thursday, August 25, 2016 at 3:57:30 PM UTC+2, Mike Bayer wrote:
>
>
>
> well dependencyprocessor is about doing the objects in the right order, 
>

clear.
 

> and about invoking a command to "sync" important attributes from one 
> side to the other, which means copying the PK of one object to the FK of 
> another.  I guess in this case we'd be "copying" the PK of related 
> objects into the local object's comma-separated list of keys.   From 
>

exactly. The problem here is that these objects are not yet in the database 
so I can't copy the primary key because I don't know about it.
 

> that perspective this is not much of a "DependencyProcessor" problem 
> because you have this local attribute that just contains a view 
> (comma-separated-list of keys) of a collection attribute on the same 
> object.   That is, I don't see much of a "topological" problem here if 
>

How is this different from updating a column that contains the foreign key 
referring to another object (which is taken care of by a 
DependencyProcessor afaict)?
 

> I'm understanding correctly.   Copy-on-write is done by an attribute 
> event, object is written therefore we set a flag, in before_flush() we 
> copy all the objects that need to be copied-on-write, then I think you 
> have the after_flush there to update the parent object.you can set 
> up Bundle.members there also. 
>

Actually the objects (which are mapped) are already copied beforehand so 
when flush is called I have a bunch of new objects which don't have any 
primary key yet.


Anyway, I seem unable to express myself clearly, so please do not spend more 
time on this. The current solution works fine, albeit slightly inefficiently.

Thanks and Greetings, Torsten



Re: [sqlalchemy] Re: Abusing a string column as list of foreign keys (own relationship class, based on DependencyProcessor?)

2016-08-25 Thread Torsten Landschoff
Hi Mike,

On Wednesday, August 24, 2016 at 1:27:42 PM UTC+2, Mike Bayer wrote:
 

> if you've already solved the problem I'd rather not get into it :) 
>
>
I am still curious about the inner workings of SQLAlchemy's 
DependencyProcessor and how it would be possible to implement a custom 
relationship property.
Not that I would make use of it currently. :-)

Greetings, Torsten

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Re: Abusing a string column as list of foreign keys (own relationship class, based on DependencyProcessor?)

2016-08-24 Thread Torsten Landschoff
Hi all,

On Wednesday, August 24, 2016 at 3:17:55 AM UTC+2, Torsten Landschoff wrote:
>
> I am currently pulling my hair out because I have a solution that I think 
> should work on the database side (albeit I don't like it), but I can't 
> figure out how to do this with sqlalchemy.
> My current goal is to manage (long-lived) locks on copy-on-write 
> hierarchical data that is stored in Oracle RDBMS.
>
>
In case somebody was pondering this: I think that by sleeping over it and 
discussing it with a colleague I found a more "normal" solution.
So don't spend too much time on this. :-)

Greetings, Torsten
 

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Custom Relationship property for Column with csv of integer keys

2016-08-23 Thread Torsten Landschoff

Hi *,

this post is a bit short on detail because I just wrote a long posting and it 
seems to have vanished when I hit the "POST" button. *sigh*

I am trying to do something (admittedly crazy) like the code below, but 
without the after_flush hook - I would rather tell the ORM that member_ids is 
computed from the members list.
The problem is that the entries in members are new at the time of the flush, 
so their ids are not yet available.

Any hints on how to create a csv_relationship property that tracks the ids of 
referred objects in a column of comma-separated values?

from sqlalchemy import create_engine, Column, Integer, String, event
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

engine = create_engine('sqlite:///:memory:', echo=True)
Base = declarative_base()

class Person(Base):
    __tablename__ = "persons"
    id = Column(Integer, primary_key=True)
    name = Column(String)


class Bundle(Base):
    __tablename__ = "bundle"
    id = Column(Integer, primary_key=True)
    member_ids = Column(String, unique=True)

    def __init__(self, members):
        self.members = members


Base.metadata.create_all(engine)

Session = sessionmaker(bind=engine)

@event.listens_for(Session, "after_flush")
def receive_after_flush(session, flush_context):
    items = set(session.dirty) | set(session.new)
    for item in items:
        if isinstance(item, Bundle):
            session.execute(Bundle.__table__
                            .update()
                            .values(member_ids=",".join(str(x.id) for x in item.members))
                            .where(Bundle.id == item.id))


session = Session()
persons = [Person(name=name) for name in ["Martin", "Michael", "Fabian"]]
for person in persons:
    session.add(person)
session.commit()

bundle = Bundle(persons[1:])
session.add(bundle)
session.commit()
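The reason the after_flush hook works at all: by the time it fires, the INSERT statements have been executed, so newly added objects already carry their primary keys, while session.new still shows the pre-flush state. A minimal check (my sketch, not from the original post):

```python
from sqlalchemy import Column, Integer, String, create_engine, event
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Person(Base):
    __tablename__ = "persons"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

seen_ids = []

@event.listens_for(Session, "after_flush")
def after_flush(session, flush_context):
    # The INSERTs have run: ids are populated even though the commit is
    # still pending and session.new still lists the objects.
    seen_ids.extend(p.id for p in session.new if isinstance(p, Person))

session = Session()
session.add(Person(name="Martin"))
session.commit()
```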


Thanks and Greetings, Torsten

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Abusing a string column as list of foreign keys (own relationship class, based on DependencyProcessor?)

2016-08-23 Thread Torsten Landschoff
Hello everybody,

I am currently pulling my hair out because I have a solution that I think 
should work on the database side (albeit I don't like it), but I can't 
figure out how to do this with sqlalchemy.
My current goal is to manage (long-lived) locks on copy-on-write 
hierarchical data that is stored in Oracle RDBMS.

Of course the code must guarantee uniqueness. The funny thing is that I can 
have multiple locks on the same object, as it can be shared due to the 
shallow-copy nature of the application.

The weird schema is something like this:

CREATE TABLE item_locks (
lock_id VARCHAR(36) NOT NULL, 
context_id INTEGER NOT NULL, 
target_path VARCHAR(2048) NOT NULL, 
target_id INTEGER, 
expiration_time DATETIME, 
owner_id INTEGER NOT NULL, 
PRIMARY KEY (lock_id), 
CONSTRAINT item_locks_unique UNIQUE (target_path), 
FOREIGN KEY(id) REFERENCES taggable (id) ON DELETE CASCADE, 
FOREIGN KEY(target_id) REFERENCES taggable (id) ON DELETE CASCADE, 
FOREIGN KEY(owner_id) REFERENCES users (id) DEFERRABLE INITIALLY 
deferred
);
CREATE INDEX ix_taggable_locks_target_id ON taggable_locks (target_id);

Yes I know.

My code is basically working fine but I ran into one limitation: 
target_path is actually a comma separated list of integers like "5,7,8" 
which identify the locked object (id 8, same as target_id) as seen from the 
context object (id 5, same as context_id).

Given the copy on write nature I have to propagate the locks to the actual 
writable object though. context_id is always writable but the children are 
reused across different contexts. A COW operation will create a new 
instance that is also writable, so the lock may move from "5,7,8" to "9, 8" 
with id 9 being the new copy of the object with id 7. (For completeness: 
when the object is actually written by the lock holder, the lock would 
transform to something like "10", with context_id = target_id = 10 as well).

What I forgot though: usually the copy-on-write objects are created within 
the ORM session, which means that I actually got target_path = "None,8" when 
using the code in the intended setting. That's one thing I did not have in 
mind... :-(

I can hack around this by code like this:

from sqlalchemy import create_engine, Column, Integer, String, event
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

engine = create_engine('sqlite:///:memory:', echo=True)
Base = declarative_base()

class Person(Base):
    __tablename__ = "persons"
    id = Column(Integer, primary_key=True)
    name = Column(String)


class Bundle(Base):
    __tablename__ = "bundle"
    id = Column(Integer, primary_key=True)
    member_ids = Column(String, unique=True)

    def __init__(self, members):
        self.members = members


Base.metadata.create_all(engine)

Session = sessionmaker(bind=engine)

@event.listens_for(Session, "after_flush")
def receive_after_flush(session, flush_context):
    items = set(session.dirty) | set(session.new)
    for item in items:
        if isinstance(item, Bundle):
            session.execute(Bundle.__table__
                            .update()
                            .values(member_ids=",".join(str(x.id) for x in item.members))
                            .where(Bundle.id == item.id))


session = Session()
persons = [Person(name=name) for name in ["Martin", "Michael", "Fabian"]]
for person in persons:
    session.add(person)
session.commit()

bundle = Bundle(persons[1:])
session.add(bundle)
session.commit()

This comes with its share of new problems (like updating bundle.members 
after the initial commit does not yield an update).

This frustrates me because SQLAlchemy internally has all the knowledge needed 
for topological sorting of operations, but I cannot figure out how to use 
that goodness to implement the schema I would like to have.
I have a faint hope that Mike will come up with a recipe for building my own 
relationship-like property, so that the code can be changed to just:

class Bundle(Base):
    # ...

    members = csv_relationship(Person, "id")


BTW: for extra fun, the lock propagation works on the target_path column in 
before_flush.

Any hints on how to implement this with SQLAlchemy are greatly appreciated!

Greetings, Torsten

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Getting the identity key of a mapped instance w/o SQL query

2013-08-31 Thread Torsten Landschoff


Hi Michael,

On 08/31/2013 12:34 AM, Michael Bayer wrote:
> well those are old functions and they should document that what you
> usually want is just inspect(obj).key, if you have an object already.
> I added http://www.sqlalchemy.org/trac/ticket/2816 for that.
>
> just to verify, state.key does what you want, right?

Correct. I was just wondering whether the right name for it is key or
InstanceState.identity_key; the latter is documented.

Thanks and greetings,

Torsten

-- 
DYNAmore Gesellschaft fuer Ingenieurdienstleistungen mbH
Torsten Landschoff

Office Dresden
Tel: +49-(0)351-312002-10
Fax: +49-(0)351-312002-29

mailto:torsten.landsch...@dynamore.de
http://www.dynamore.de

DYNAmore Gesellschaft für FEM Ingenieurdienstleistungen mbH
Registration court: Stuttgart, HRB 733694
Managing director: Prof. Dr. Karl Schweizerhof, Dipl.-Math. Ulrich Franz

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.


[sqlalchemy] Getting the identity key of a mapped instance w/o SQL query

2013-08-30 Thread Torsten Landschoff
Hi *,

I am trying to cache SQLAlchemy queries in memory for a rich client
application. To invalidate the cache for changes seen in the database, I
am trying to drop in-memory instances that have been changed or deleted.

This requires comparing the identity of the deleted objects with that of
the in-memory objects. I tried using identity_key for this and failed,
because it tries to reload from the database, and I expire the instances
when I am told they had changes.
The attached IPython notebook shows the behaviour. Short summary:


Reloads expired state (potential ObjectDeletedError)

identity_key(instance=instance)
mapper.identity_key_from_instance(instance)
mapper.primary_key_from_instance(instance)


Uses old primary key (no reload, no ObjectDeletedError)

object_state(user).identity_key
object_state(user).identity
object_state(user).key
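A condensed sketch of the difference (my illustration; the row is deleted behind the session's back, the way a cache invalidation would observe it):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from sqlalchemy.orm.exc import ObjectDeletedError
from sqlalchemy.orm.util import identity_key, object_state

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Session = sessionmaker(engine)

session = Session()
user = User(name="Joe")
session.add(user)
session.commit()          # expire_on_commit expires the instance

# Delete the row from a second session, behind the first session's back.
other = Session()
other.query(User).delete()
other.commit()

# The state-based accessor answers from memory and emits no SQL:
assert object_state(user).key[1] == (1,)

# identity_key() refreshes the expired instance and hits the missing row:
deleted = False
try:
    identity_key(instance=user)
except ObjectDeletedError:
    deleted = True
```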

The main reason I care is that identity_key may issue database queries,
which kills any performance improvement from my query cache.
I think this should be documented in SQLAlchemy; I did not expect those
functions ever to raise an exception.

Please consider extending the documentation via my attached patch (also
adds a unit test for the ObjectDeletedError).

Greetings, Torsten

-- 
DYNAmore Gesellschaft fuer Ingenieurdienstleistungen mbH
Torsten Landschoff

Office Dresden
Tel: +49-(0)351-312002-10
Fax: +49-(0)351-312002-29

mailto:torsten.landsch...@dynamore.de
http://www.dynamore.de

DYNAmore Gesellschaft für FEM Ingenieurdienstleistungen mbH
Registration court: Stuttgart, HRB 733694
Managing director: Prof. Dr. Karl Schweizerhof, Dipl.-Math. Ulrich Franz

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.
# Attached IPython notebook, transcribed as a script (outputs shown as comments):

import sqlalchemy
sqlalchemy.__version__
# Out[1]: '0.9.0'

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.orm.util import identity_key, object_state, object_mapper, class_mapper
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'

    id = Column(Integer, primary_key=True)
    name = Column(String(50))

engine = create_engine("sqlite:///")
Base.metadata.create_all(engine)

Session = sessionmaker(engine)
session = Session()

user = User(name="Joe")
session.add(user)
session.commit()

identity_key(instance=user)
# Out[3]: (__main__.User, (1,))

object_state(user).identity_key
# Out[4]: (__main__.User, (1,))

object_state(user).identity
# Out[5]: (1,)

object_state(user).key
# Out[6]: (__main__.User, (1

Re: [sqlalchemy] SQLAlchemy generates multiple connections

2012-11-28 Thread Torsten Landschoff
On 11/27/2012 04:33 PM, Michael Bayer wrote:

 On Nov 27, 2012, at 5:53 AM, Daniel Rentsch wrote:

 We are trying to implement a locking mechanism which relies on the
 incoming/used connections returned/used by SQLAlchemy. The problem we
 ran into is that SQLAlchemy uses a new database connection after
 calling expire_all() if we use the autocommit=True functionality. We
 expected that only one connection is used in one session.

 More precisely: 

 *IF*  you use *autocommit=True*
 *   IF * you call call *expire_all* 
 *   AND AFTER THIS* generate a query with more than one affected
 tables via *filter_by*
 *   THEN* SQLAlchemy will use multiple connections


 A Session using autocommit=True will pull a new connection from the
 connection pool for each individual execute operation, or each
 flush().   If you want only one connection used per session, you
 either need to stick with the recommended autocommit=False (the
 connection will be per-transaction, actually), or bind the Session to
 a single connection directly (sess = Session(bind=some_connection)).
Let me get that straight: You are saying that a single Session can have
*multiple concurrent* connections if configured with autocommit=True?

I tried to analyze this further by digging stack traces. This diagram
tries to explain what's going on:



Basically, the original Query for a user with a given country "spain"
(which is expired) causes a connection checkout (conn1) for the original
query.
However, before the first SELECT on that connection is run, another
connection is checked out to refresh the Country instance "spain".

Running the example with a bit of extra logging also supports that claim:

2012-11-28 12:27:54,219 sqlalchemy.pool.NullPool DEBUG  Created new connection <sqlite3.Connection object at 0x16c8030>
2012-11-28 12:27:54,219 sqlalchemy.pool.NullPool DEBUG  Connection <sqlite3.Connection object at 0x16c8030> checked out from pool

--- Inner query to refresh "spain" ---
2012-11-28 12:27:54,220 sqlalchemy.pool.NullPool DEBUG  Created new connection <sqlite3.Connection object at 0x16d8858>
2012-11-28 12:27:54,221 sqlalchemy.pool.NullPool DEBUG  Connection <sqlite3.Connection object at 0x16d8858> checked out from pool
2012-11-28 12:27:54,221 sqlalchemy.engine.base.Engine INFO  SELECT tf_country.id AS tf_country_id
FROM tf_country
WHERE tf_country.id = ?
2012-11-28 12:27:54,222 sqlalchemy.engine.base.Engine INFO  (2,)
2012-11-28 12:27:54,222 sqlalchemy.engine.base.Engine DEBUG  Col ('tf_country_id',)
2012-11-28 12:27:54,223 sqlalchemy.engine.base.Engine DEBUG  Row (2,)
2012-11-28 12:27:54,223 sqlalchemy.pool.NullPool DEBUG  Connection <sqlite3.Connection object at 0x16d8858> being returned to pool
2012-11-28 12:27:54,223 sqlalchemy.pool.NullPool DEBUG  Closing connection <sqlite3.Connection object at 0x16d8858>
--- End of inner query ---

2012-11-28 12:27:54,224 sqlalchemy.engine.base.Engine INFO  SELECT tf_user.id AS tf_user_id, tf_user.country_id AS tf_user_country_id
FROM tf_user
WHERE ? = tf_user.country_id
2012-11-28 12:27:54,224 sqlalchemy.engine.base.Engine INFO  (2,)
2012-11-28 12:27:54,224 sqlalchemy.engine.base.Engine DEBUG  Col ('tf_user_id', 'tf_user_country_id')
2012-11-28 12:27:54,225 sqlalchemy.engine.base.Engine DEBUG  Row (1, 2)
2012-11-28 12:27:54,225 sqlalchemy.pool.NullPool DEBUG  Connection <sqlite3.Connection object at 0x16c8030> being returned to pool
2012-11-28 12:27:54,225 sqlalchemy.pool.NullPool DEBUG  Closing connection <sqlite3.Connection object at 0x16c8030>


I would think that this is not the desired behaviour, especially given
an earlier message from you on this list:

On 11/09/2012 11:36 PM, Michael Bayer wrote:

 On Nov 8, 2012, at 5:01 PM, Torsten Landschoff wrote:
 My first tests with the SQLAlchemy core where promising, but when using
 the ORM I get a bunch of deadlocks where it seems like the session opens
 two connections A and B where A locks B out.
 The Session never does this, assuming just one Engine associated with it.  It 
 acquires one Connection from the Engine, holds onto it and uses just that 
 connection, until commit() at which point the connection is released to the 
 pool.   
Maybe this is only the case for sessions with autocommit=False, but you
did not qualify that statement.

Can you clarify please?

Thanks a bunch,

Torsten

-- 
DYNAmore Gesellschaft fuer Ingenieurdienstleistungen mbH
Torsten Landschoff

Office Dresden
Tel: +49-(0)351-4519587
Fax: +49-(0)351-4519561

mailto:torsten.landsch...@dynamore.de
http://www.dynamore.de

Registration court: Mannheim, HRB: 109659, based in Karlsruhe,
Managing director:  Prof. Dr. K. Schweizerhof, Dipl.-Math. U. Franz

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group

Re: [sqlalchemy] SQLAlchemy generates multiple connections

2012-11-28 Thread Torsten Landschoff
Hi Michael,

first, thank you for the extensive explanation, it is much appreciated.

On 11/28/2012 04:59 PM, Michael Bayer wrote:

 On Nov 28, 2012, at 6:34 AM, Torsten Landschoff wrote:

 Let me get that straight: You are saying that a single Session can
 have *multiple concurrent* connections if configured with
 autocommit=True?

 First, I'm assuming that you're familiar with the recommendations
 regarding the autocommit=True option, which is that *it is a legacy
 option not for general use*
 (http://docs.sqlalchemy.org/en/rel_0_8/orm/session.html#autocommit-mode).
  The are only two reasons this flag should ever be used:

 1. you're running a SQLA app that was built on some very ancient
 version, that is, 0.3 or 0.4, and can't change it.  In this scenario,
 you're actually executing queries without the Session being aware of
 the transaction scope.

 2. you're a framework integrator, and you're using autocommit=True so
 that you can call begin() on the Session explicitly. In this case,
 the Session is *not* executing queries outside of transactions,
 because the integration makes sure that begin() is called before the
 Session is used to communicate with a database.
We are using it for a third reason:

If highly concurrent reads are desired against the SQLite database,
it is advised that the autoflush feature be disabled, and
potentially even that autocommit be re-enabled, which has the effect
of each SQL statement and flush committing changes immediately.

http://docs.sqlalchemy.org/en/rel_0_8/dialects/sqlite.html?highlight=sqlite#database-locking-behavior-concurrency

 I've reproduced a test case from this, in the future just sending that
 along would be helpful because when you said subquery, I assumed you
 were using subqueryloading and went through all the trouble to test
 that. 
Sorry, I thought it was clear that I was referring to the test program
written by Daniel which started this thread.
 In this case, the Query procures the Connection from the Session, then
 goes to execute the query, but in the case of a comparison like
 filter_by(some_many_to_one=someobject), a special mechanism is used
 that fetches the id from someobject as late as possible, which is
 required for some relatively common expiration cases I'd have to dig
 up to illustrate.   So in that one case, probably no others besides
 yield_per(), you get a refresh operation of the mapped object within
 the scope of the select() construct being compiled which will check
 out another connection if the Session is being run within autocommit.
Okay.
 The Session never does this, assuming just one Engine associated with it.  
 It acquires one Connection from the Engine, holds onto it and uses just 
 that connection, until commit() at which point the connection is released 
 to the pool.   
 Above, I was trying to simplify matters assuming the recommended
 patterns were used.  Here's the full sentence:

 The Session never does this, assuming just one Engine associated with
 it, and assuming it is used in its recommended configuration of
 autocommit=False.
Thanks for the clarification. So basically, we hit a dead end by using
autocommit=True. I guess we will finally convert to autocommit=False and
see if we can bypass the locking issues inherent to SQLite.

Greetings, Torsten




Re: [sqlalchemy] SQLAlchemy generates multiple connections

2012-11-28 Thread Torsten Landschoff
On 11/28/2012 06:32 PM, Michael Bayer wrote:

 We are using it for a third reason:

 If highly concurrent reads are desired against the SQLite
 database, it is advised that the autoflush feature be disabled,
 and potentially even that autocommit be re-enabled, which has the
 effect of each SQL statement and flush committing changes
 immediately.


 OK, I'd set this on the SQLite connection itself using the
 autocommit
 option: http://docs.python.org/2/library/sqlite3.html#controlling-transactions
   basically setting isolation_level=None.  You can do this in
 connect_args to create_engine() or in a pool on connect event.   I'd
 keep the Session in autocommit=False, and still have the app run
 using commit as a signal that we're done with the work we're
 doing.   that doc should be improved.
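The driver-level effect of this suggestion can be illustrated with plain pysqlite, without SQLAlchemy involved (the `connect_args` route would merely pass `isolation_level` through to `sqlite3.connect()`; this is a sketch, not the application's actual setup):

```python
import sqlite3

# With isolation_level=None the pysqlite driver runs in autocommit mode:
# every statement commits immediately, so no open transaction sits on the
# SQLite database lock between statements.  With SQLAlchemy this is what
# create_engine("sqlite:///...", connect_args={"isolation_level": None})
# would hand through to the driver.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("create table t (x integer)")
conn.execute("insert into t values (1)")

# Autocommit: the insert is already committed, nothing is pending.
assert conn.in_transaction is False
```

The Session-level transaction demarcation (autocommit=False, commit as the "done" signal) stays as Michael describes; only the driver stops holding locks between statements.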
AFAICT this will mean we have absolutely no transactional behaviour, but
each flush will automatically make it to the database.
Unfortunately some of our operations require the transaction boundary to
be able to roll back if there is, for example, a permission issue.

Thanks for the pointer though - we will first switch to autocommit=False
and adjust our code.

Greetings,

Torsten




Re: [sqlalchemy] (SQLite) Outside auto-locking based on SQLAlchemy Events

2012-11-12 Thread Torsten Landschoff
Hi Michael,

On 11/09/2012 11:36 PM, Michael Bayer wrote:
 On Nov 8, 2012, at 5:01 PM, Torsten Landschoff wrote:
 My first tests with the SQLAlchemy core were promising, but when using
 the ORM I get a bunch of deadlocks where it seems like the session opens
 two connections A and B where A locks B out.
 The Session never does this, assuming just one Engine associated with it.  It 
 acquires one Connection from the Engine, holds onto it and uses just that 
 connection, until commit() at which point the connection is released to the 
 pool.   
Okay, thanks, maybe the error was elsewhere then.
 SQLite supports a SERIALIZABLE mode of isolation, in conjunction with a 
 workaround for a pysqlite bug 
 (http://docs.sqlalchemy.org/en/rel_0_7/dialects/sqlite.html#serializable-transaction-isolation)
  which might be what you're looking for, though I generally try to steer 
 users away from any usage of SQLite that depends on high concurrency (see 
 High Concurrency at http://sqlite.org/whentouse.html).
I do not consider an application that downloads new records once per
hour, concurrently to the GUI, high concurrency. And that background
process is not really a problem either, as long as I just lock the
database all the time. This makes the GUI freeze for a couple of minutes
though.
Therefore I am looking for a solution that will make background and main
thread cooperate wrt. database access.

BTW: The main issue is not concurrency in itself. SQLite just uses
filesystem locking, which is basically a spin lock. So as long as the
background thread updates the database it has a high probability of
reacquiring the lock after each transaction, while the GUI thread will
fail to hit the slots where the db is not locked.

 To diagnose this code, you'd need to make use of the tools available - which 
 includes connection pool logging, engine logging, and possibly usage of 
 custom pools like sqlalchemy.pool.AssertionPool which ensures that only one 
 connection is used at any time.
Thanks for the pointer to AssertionPool. I already use the others.

BTW, I found one offender that breaks running database upgrades with my
locking schemes:

from sqlalchemy import *
from sqlalchemy.pool import *

engine = create_engine("sqlite:////home/torsten/some.db",
                       poolclass=AssertionPool)
conn = engine.connect()
metadata = MetaData(conn, reflect=True)

This results in the following backtrace here:

$ python demo.py
Traceback (most recent call last):
  File "demo.py", line 6, in <module>
    metadata = MetaData(conn, reflect=True)
  File "/opt/dynasdk/loco2-precise/lib/python2.7/site-packages/sqlalchemy/schema.py", line 2363, in __init__
    self.reflect()
  File "/opt/dynasdk/loco2-precise/lib/python2.7/site-packages/sqlalchemy/schema.py", line 2497, in reflect
    connection=conn))
  File "/opt/dynasdk/loco2-precise/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 2504, in table_names
    conn = self.contextual_connect()
  File "/opt/dynasdk/loco2-precise/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 2490, in contextual_connect
    self.pool.connect(),
  File "/opt/dynasdk/loco2-precise/lib/python2.7/site-packages/sqlalchemy/pool.py", line 224, in connect
    return _ConnectionFairy(self).checkout()
  File "/opt/dynasdk/loco2-precise/lib/python2.7/site-packages/sqlalchemy/pool.py", line 387, in __init__
    rec = self._connection_record = pool._do_get()
  File "/opt/dynasdk/loco2-precise/lib/python2.7/site-packages/sqlalchemy/pool.py", line 911, in _do_get
    raise AssertionError("connection is already checked out" + suffix)
AssertionError: connection is already checked out at:
  File "demo.py", line 5, in <module>
    conn = engine.connect()

I would have expected it to reflect using the connection passed to the
MetaData constructor.
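As a possible workaround (a sketch against modern SQLAlchemy, where `MetaData.reflect()` accepts `bind=`; the constructor path shown in the traceback goes through `contextual_connect()` instead), reflection can be driven over the connection that is already checked out:

```python
from sqlalchemy import MetaData, create_engine, text

engine = create_engine("sqlite://")
with engine.connect() as conn:
    conn.execute(text("create table component (id integer primary key)"))
    # Reflect over the connection we already hold, so no second
    # connection is checked out from the pool (which is what trips
    # AssertionPool in the traceback above).
    metadata = MetaData()
    metadata.reflect(bind=conn)

assert "component" in metadata.tables
```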

Greetings, Torsten




[sqlalchemy] (SQLite) Outside auto-locking based on SQLAlchemy Events

2012-11-08 Thread Torsten Landschoff
Hi Michael et al,

I am banging my head against a (so it seems) trivial problem for days
now. Basically, I finally need to lock my SQLite database because
multiple threads are writing to it. This does not happen too often, but
I have a single thread that is dumping thousands of records into the
database while the user interface might concurrently do simple updates
like "update component set short_description='Foo' where id=10".

Originally the sync thread was slow enough that the SQLite side of
locking did work. Now that it is optimized a bit it does not keep up
anymore. This leads to all other requests going the "database is locked"
way of failing.

99% of the time all db access is sequential so I figured it would
suffice to lock the database before I do anything to it and unlock the
database when done. I tried to instrument all code but with the unit of
work pattern it is hard to find out where the lock was forgotten...

So my current approach is to use the before_execute event to open a lock
and when a connection is returned to the pool I unlock it. I attached
the code of that mechanism.

My first tests with the SQLAlchemy core were promising, but when using
the ORM I get a bunch of deadlocks where it seems like the session opens
two connections A and B where A locks B out. I can provide more data and
example code, but I would first like to know if my approach is
completely bogus in your eyes.

If it is I am open to better ideas.

BTW: Originally I captured before_flush and commit/rollback session
events, but this still created locking errors due to read requests going
unchecked.

Greetings, Torsten


# -*- coding: utf-8 -*-

import collections
import threading
import traceback
from sqlalchemy import event
from sqlalchemy.engine import Connection


class DatabaseAutoLocker(object):
    """Manages an access lock on a database that ensures exclusive access
    by a single thread.  The intended usage is to create an instance of
    this class right after creating the engine::

       engine = create_engine("sqlite:///test.db")
       DatabaseAutoLocker(engine)

    The DatabaseAutoLocker instance then attaches itself to the engine via
    events and lives as long as that engine exists, so there is no need to
    keep a reference to the locker.
    """

    def __init__(self, engine, timeout=None):
        """Creates an autolocker for the database engine given in *engine*.

        :param engine: database engine as an instance of sqlalchemy.engine

        :param timeout: time in seconds that a client waits for the
            database connection before an exception is raised.  None
            (default) disables the timeout and the client waits
            indefinitely.
        """
        self.timeout = timeout

        #: Protects access to internal data
        self._mutex = threading.RLock()

        #: Holds the list of clients still waiting for the database lock
        self._pending_requests = collections.deque()

        #: Currently active connection (which holds the database lock for
        #: itself).  None if no connection holds the lock.
        self._active_dbapi_connection = None

        #: If enabled, holds the traceback of the caller currently holding
        #: the database lock; otherwise always None.
        self._active_locker_traceback = None

        event.listen(engine, "before_execute", self.__connection_event_before_execute)
        event.listen(engine, "checkin", self.__pool_event_checkin)

    def __connection_event_before_execute(self, conn, clauseelement, multiparams, params):
        """Registers the first execution of a statement over a database
        connection.  This is where the database must be locked against
        other connections.
        """
        dbapi_connection = _get_dbapi_connection(conn)
        request = None
        with self._mutex:
            if self._active_dbapi_connection is dbapi_connection:
                # Nothing to do, the connection already owns the lock.
                return

            locker_traceback = None
            if self.timeout is not None:
                locker_traceback = traceback.format_stack()

            if self._active_dbapi_connection is None

Re: [sqlalchemy] Automatically merge a specific model/object

2012-04-25 Thread Torsten Landschoff

Hi all,

sorry for pulling out this old thread again. I continuously run into this.

On 08/09/2010 11:20 PM, Michael Bayer wrote:
If I'm understanding correctly, that's not as much about merging as it 
is for the unique object recipe, which is this: 
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/UniqueObject which 
assumes the usage of a scoped_session. Since you're looking for the 
callable EmailAddress to know about a session and you don't want to 
pass it in, there's not really any other option. 
I like the unique object recipe. However, it does not go far enough for 
my use case: it can take seconds for my transactions to complete 
(waiting for remote data), and in between another database change can 
add the object.
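For reference, the get-or-create idea behind that recipe can be sketched as follows (illustrative names, SQLAlchemy 1.4+ imports assumed; note it does nothing about the race described above, since another transaction can still insert the row between the query and the commit):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class EmailAddress(Base):
    __tablename__ = "email_address"
    id = Column(Integer, primary_key=True)
    email = Column(String, unique=True)

def get_or_create(session, model, **kw):
    # Hypothetical helper: return the matching row if present, else
    # create it.  The window between this query and the eventual commit
    # is exactly where a concurrent insert can sneak in.
    obj = session.query(model).filter_by(**kw).one_or_none()
    if obj is None:
        obj = model(**kw)
        session.add(obj)
    return obj

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)

a = get_or_create(session, EmailAddress, email="a@example.com")
b = get_or_create(session, EmailAddress, email="a@example.com")
assert a is b  # the second call found the pending row via autoflush
```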


Before I resort to trial and error: Is it possible to use 
session.merge() in the before_flush event to combine the in-session data 
with data already in the database?


The downside is that this requires the transaction isolation level to 
allow phantom reads, which I do not really like.


If it is of any interest, I would like to have this for an external BLOB 
storage. I use an SQL database to organize access to mass data in a git 
like manner. If a new file is imported, the goal is to reuse the content 
data if any existing blob has the same hash.


In the filesystem this is like

cp new_file.wmv vault/temp/tmpXYZ.dat
hash=`sha1sum vault/temp/tmpXYZ.dat | cut -f 1 -d " "`
if [ -e vault/$hash ]; then
    # File exists, remove the temporary file.  Leaves the old file
    # untouched wrt. mtime, so an rsync based backup will be fast.
    rm vault/temp/tmpXYZ.dat
else
    # Add the new file.  With two concurrent imports, either can end up
    # being the final file.  Assuming a good hash, they have the same
    # content anyway.
    mv vault/temp/tmpXYZ.dat vault/$hash
fi


Greetings, Torsten




Re: [sqlalchemy] ClassManager.new_instance does not longer initialize polymorphic column

2012-02-01 Thread Torsten Landschoff
Hi Michael,

On Tue, 2012-01-31 at 10:26 -0500, Michael Bayer wrote:

 If you're using new_instance() to create the instance, then you are
 deserializing data from somewhere, why isn't the discriminator value,
 which is after all one of the column values in the table, not present
 in this deserialization ?

Because I considered the value of the type column an implementation
detail and it was managed automatically by SQLAlchemy so far. Further,
the serialized format is not that close to the database format (it is
actually XML).

 Right now the event in question is emitted only from the __init__()
 method of your object, which is obviously what you don't call when
 deserializing.The discriminator value is now considered to be just
 another instance variable that you can change freely - a default for
 it is configured from __init__().

I actually think about diverging from my original approach. In case
anybody cares: So far, I am using __init__ to initialize some required
columns from context (for example, the creator column is initialized
from the current user). During deserialization this should not happen as
the current user will probably be different from the creator of the
received instance.

It would probably be more Pythonic to change the implementation so that
an empty instance can be created for every class, and to catch missing
information at the database level.

 Anyway the event here is the init event which you can emit from 
 classmanager:
 
 manager.dispatch.init(state, args, kw)
 
 where args, kw are what would normally be sent to __init__, but can just 
 be blank here.
 
 Invoking the events is not entirely public API, though. 

Thanks for the information. I will therefore probably set the
discriminator column manually, since deserialization cannot go through
__init__.

Greetings, Torsten

-- 
DYNAmore Gesellschaft fuer Ingenieurdienstleistungen mbH
Torsten Landschoff

Office Dresden
Tel: +49-(0)351-4519587
Fax: +49-(0)351-4519561

mailto:torsten.landsch...@dynamore.de
http://www.dynamore.de

Registration court: Stuttgart, HRB 733694
Managing director: Prof. Dr. Karl Schweizerhof, Dipl.-Math. Ulrich Franz





[sqlalchemy] ClassManager.new_instance does not longer initialize polymorphic column

2012-01-31 Thread Torsten Landschoff
Hello again,

for a while I want to upgrade our application to use SQLAlchemy 0.7.x.
However I am running into changed behaviour wrt.
ClassManager.new_instance. The behaviour I rely on was discussed here:

http://article.gmane.org/gmane.comp.python.sqlalchemy.user/28746

Basically, I want to create an empty instance of a mapped class and
unserialize my data into it. I am not using Pickle because of security
and compatibility fears.

The attached sample program worked fine with SQLAlchemy 0.6.8:

$ python new_instance.py 
<__main__.Engineer object at 0x10774d0>

However, with SQLAlchemy 0.7.5 I get

$ python new_instance.py 
Traceback (most recent call last):
  File "new_instance.py", line 27, in <module>
    session.commit()
  [...]
  File "/usr/local/lib/python2.6/dist-packages/SQLAlchemy-0.7.5dev-py2.6-linux-x86_64.egg/sqlalchemy/engine/default.py", line 330, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (IntegrityError) people.type may not be NULL
u'INSERT INTO people (type) VALUES (?)' (None,)

I'd really like to use 0.7.5 but I don't know how to replace
new_instance. I attached the sample program and a suggested patch for
the documentation: In the declarative example, the type column is
nullable.

I found out the hard way that it should rather not be nullable. Having a
non-nullable type column at least uncovers the problem when writing to
the database, not when reading the corrupted database :-)

Greetings, Torsten


from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.orm.attributes import manager_of_class
from sqlalchemy.ext.declarative import declarative_base


Base = declarative_base()

class Person(Base):
__tablename__ = 'people'
id = Column(Integer, primary_key=True)
discriminator = Column('type', String(50), nullable=False)
__mapper_args__ = {'polymorphic_on': discriminator}

class Engineer(Person):
__tablename__ = 'engineers'
__mapper_args__ = {'polymorphic_identity': 'engineer'}
id = Column(Integer, ForeignKey('people.id'), primary_key=True)
primary_language = Column(String(50))

engine = create_engine("sqlite:///")
Base.metadata.create_all(engine)
session = sessionmaker(engine)()

e = manager_of_class(Engineer).new_instance()
session.add(e)
session.commit()

print session.query(Engineer).first()
diff -r ebe9514a69a4 lib/sqlalchemy/ext/declarative.py
--- a/lib/sqlalchemy/ext/declarative.py	Sat Jan 28 17:43:13 2012 -0500
+++ b/lib/sqlalchemy/ext/declarative.py	Tue Jan 31 12:50:35 2012 +0100
@@ -389,7 +389,7 @@
 class Person(Base):
 __tablename__ = 'people'
 id = Column(Integer, primary_key=True)
-discriminator = Column('type', String(50))
+discriminator = Column('type', String(50), nullable=False)
 __mapper_args__ = {'polymorphic_on': discriminator}
 
 class Engineer(Person):


Re: [sqlalchemy] ClassManager.new_instance does not longer initialize polymorphic column

2012-01-31 Thread Torsten Landschoff
On Tue, 2012-01-31 at 12:52 +0100, Torsten Landschoff wrote:

 However, with SQLAlchemy 0.7.5 I get
 
 $ python new_instance.py 
 Traceback (most recent call last):
   File "new_instance.py", line 27, in <module>
     session.commit()
   [...]
   File "/usr/local/lib/python2.6/dist-packages/SQLAlchemy-0.7.5dev-py2.6-linux-x86_64.egg/sqlalchemy/engine/default.py", line 330, in do_execute
     cursor.execute(statement, parameters)
 sqlalchemy.exc.IntegrityError: (IntegrityError) people.type may not be NULL
 u'INSERT INTO people (type) VALUES (?)' (None,)

I found a workaround:

--- a/new_instance.py   2012-01-31 12:50:44.510525675 +0100
+++ b/new_instance.py   2012-01-31 13:05:52.110514861 +0100
@@ -1,6 +1,6 @@
 from sqlalchemy import *
 from sqlalchemy.orm import *
-from sqlalchemy.orm.attributes import manager_of_class
+from sqlalchemy.orm.attributes import manager_of_class, instance_state
 from sqlalchemy.ext.declarative import declarative_base
 
 
@@ -22,7 +22,11 @@
 Base.metadata.create_all(engine)
 session = sessionmaker(engine)()
 
-e = manager_of_class(Engineer).new_instance()
+manager = manager_of_class(Engineer)
+e = manager.new_instance()
+state = instance_state(e)
+manager.mapper._set_polymorphic_identity(state)
+
 session.add(e)
 session.commit()

But this is leaving the area of documented API. I found out that
SQLAlchemy actually initializes the polymorphic_identity column by
registering an on_init event. I'd rather trigger that but I did not yet
find out how to do that.

Thanks, Torsten




[sqlalchemy] Fun fact about logging in SA 0.6 (tl;dr: echo=False calls logging.getLogger(sqlalchemy.engine).setLevel(logging.WARN))

2011-09-15 Thread Torsten Landschoff
Hi *,

just figured that other people may be running into the same problem.

I just finally got my sqlalchemy engine to accept a new log level at
runtime. I followed the documentation and passed echo=False when
creating the engine.

Beforehand I had configured logging by

import logging
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("sqlalchemy.engine").setLevel(logging.DEBUG)

But echo=False is not the default as stated in the create_engine
documentation at 

http://www.sqlalchemy.org/docs/06/core/engines.html?highlight=create_engine#sqlalchemy.create_engine

At commit 1adbbfddc979 in rel_0_6, the default in fact comes from
sqlalchemy/lib/sqlalchemy/engine/base.py class Engine:

def __init__(self, pool, dialect, url,
logging_name=None, echo=None, proxy=None,
execution_options=None
):

which triggers this code in log.py:

if echoflag is not None:
    l = logging.getLogger(name)
    ...
    elif echoflag is False:
        l.setLevel(logging.WARN)
else:
    l = logging.getLogger(name)

So this overwrites the log level I had set via logger.setLevel. This
seems to be fixed in 0.7 (where False and None are equivalent according
to a short glance).

So this will not give any output:

import logging
from sqlalchemy import *

logging.basicConfig(level=logging.DEBUG)
logging.getLogger("sqlalchemy.engine").setLevel(logging.DEBUG)

engine = create_engine("sqlite:///", echo=False)
engine.execute("select 1")

Changing echo=False in the create_engine call to echo=None (or
removing echo=False altogether) will fix this. Uargh!

Greetings, Torsten

PS: Just checked, in SQLAlchemy 0.7.3 there is no difference between
passing echo=None and echo=False.




Re: [sqlalchemy] Make a copy of a select

2011-09-09 Thread Torsten Landschoff
On Tue, 2011-09-06 at 10:03 -0400, Michael Bayer wrote:
 the generative method on Select is select_from():
 
 http://www.sqlalchemy.org/docs/core/expression_api.html?highlight=select_from#sqlalchemy.sql.expression.Select.select_from

Thanks. The documentation on that method says:

"return a new select() construct with the given FROM expression applied
to its list of FROM objects."

To me this reads as if the old froms are replaced rather than extended.
I would suggest replacing "applied to" with "appended to" or "merged
into"; that would make it clearer.

Looking at the source, select_from really just calls append_from on the
copied select.

Greetings, Torsten




[sqlalchemy] Make a copy of a select

2011-09-06 Thread Torsten Landschoff
Hello *,

(Using SQLAlchemy 0.6.8, but planning to move to 0.7 soon)

I am trying to build queries dynamically and I have to add joins on the
primary table to be able to create my where-clauses.

To add the join clause on my instance of Select, I found this
discussion:

http://thread.gmane.org/gmane.comp.python.sqlalchemy.user/5910/focus=5917

Now I need to retain the original Select untouched (as it can be used to
derive other Select instances). I found out that all the interesting
methods on Select deliver a new copy using the @_generative decorator.

For append_from, there does not seem to be a generative equivalent.
Therefore I want to roll it manually, but I don't really want to call
the private _generate method.

Therefore I am wondering if there is a canonical way to copy an instance
of Select. It seems that pickling is supported (__getstate__ and
__setstate__ are provided).

However, the class sqlalchemy.sql.expression._SelectBaseMixin overrides
_generate to also clear out exported collections, while the
__getstate__ implementation comes from ClauseElement.


My question: Is it supported to do

from copy import copy
new_select = copy(old_select)

where old_select is an instance of Select? Are there any plans to add a
generative way to extend the froms of a Select? ;-)


Thanks and Greetings, Torsten




Re: [sqlalchemy] Updating user interface after rollback (using after_rollback SessionExtension?)

2011-08-02 Thread Torsten Landschoff
Hi Michael,

On Mon, 2011-08-01 at 14:18 -0400, Michael Bayer wrote:

 The Session's default behavior is to expire fully the state present after a 
 rollback occurs.   The next access of any attribute will guaranteed pull 
 fresh from the DB in a new transaction. 
 
 I'm assuming you know this, and that there is something additional here 
 you're looking for, like an event to the GUI to actually re-access the 
 attributes, or something.

Exactly.

 A surprise for me.Yes the after_rollback() seems to only fire
 after an actual database rollback.  This is a bug in that there is no
 way to catch soft rollbacks.  I can't change the event, as it is
 integral in transactional extensions like that supplied for Zope; a
 new one will have to be added and the docs for after_rollback() will
 need to be updated.

Actually I expected after_rollback to fire only after an actual DB
rollback. Or did I misinterpret the errors in that the rollback came
before a flush to the database and therefore no actual reload from the
database would be needed?

 Ticket #2241 is added for this and is completed.   You can now use the
 0.7 tip and use the after_soft_rollback() event in conjunction with
 the is_active flag:
 
 @event.listens_for(Session, "after_soft_rollback")
 def do_something(session, previous_transaction):
     if session.is_active:
         session.execute("select * from some_table")

I looked at the is_active flag in SQLAlchemy 0.6.8 already and I think I
can only use it with autocommit disabled. Unfortunately I use
autocommit=True.

I know that's a bad choice from the SQLAlchemy point of view. The reason
why I am using it is that I want to avoid keeping the database locked
for a long time, as that makes our background processes fail
(OperationalError: database is locked). I am a victim of the limitations
of SQLite here :-(

Perhaps after the current stabilization phase, I will move to SA 0.7 and
autocommit=False. I am not sure about the latter though.

Thanks and Greetings, Torsten




[sqlalchemy] Updating user interface after rollback (using after_rollback SessionExtension?)

2011-08-01 Thread Torsten Landschoff
Hello world,

I am tearing my hair out over my SQLAlchemy usage in a GUI application,
especially when trying to make it error-tolerant.

Usually the GUI filters user inputs before they are thrown at SQLAlchemy
to store them in the database. If that fails, however, it can happen
that data is thrown at the database that leads to a rollback because of
e.g. violated foreign key constraints.

If that happens, the session rolls back (fine) but the GUI still shows
the state that I tried to write into the database. Being MVC, I would
need all database objects to fire a changed event so they are pulled
fresh from the database.

I tried using the after_rollback hook of SessionExtension (SQLAlchemy
0.6.8) to do this. Paraphrased, it works like this:

def after_rollback(self, session):
    for instance in session:
        instance.signal_changed()

This works fine if the transaction being rolled back is the toplevel
transaction. However, if it is a nested transaction, this destroys my
session state (as the listeners try to load from the database):

InvalidRequestError: This Session's transaction has been rolled back by
a nested rollback() call.  To begin a new transaction, issue
Session.rollback() first.

So I would need the callback only after the rollback of the toplevel
transaction occurred. I tried looking at the _is_transaction_boundary
property of session.transaction:

def after_rollback(self, session):
    if session.transaction._is_transaction_boundary:
        for instance in session:
            instance.signal_changed()

This stops the exceptions, but it also never reaches signal_changed if
subtransactions are rolled back. It looks like I only get a single event
for the innermost transaction?!


What would be a sane way to implement this? I also looked at the
MapperExtension in the hope that there is a callback whenever an
instance is expired, but there does not seem to be such a thing.

Hints welcome,

Torsten




[sqlalchemy] with_polymorphic mapper option for future classes

2011-07-20 Thread Torsten Landschoff
Hi *,

is there a way to use the with_polymorphic mapper option for classes yet
to be defined?

My problem is that I have a base class for a number of different
database objects. There are a few specific leaf classes of which many
instances are loaded at once.

Currently, I pass with_polymorphic=('*', None) while mapping, but that
creates a huge query for little effect. I would like to pass the two
classes I am interested in, but they are defined in other modules (which
depend on the module where I map the base class).

Is there a way to update the with_polymorphic setting on the base class
when mapping the derived class?
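For reference, later SQLAlchemy versions (0.8+) grew a query-time with_polymorphic() function, so the subclass join can be chosen per query instead of being fixed at mapping time. A minimal sketch under that assumption, with a stand-in Entry/File hierarchy (not the original application code):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker, with_polymorphic

Base = declarative_base()

class Entry(Base):
    __tablename__ = "entry"
    id = Column(Integer, primary_key=True)
    entry_type = Column(String)
    __mapper_args__ = {"polymorphic_on": entry_type,
                       "polymorphic_identity": "entry"}

class File(Entry):
    __tablename__ = "file"
    id = Column(Integer, ForeignKey("entry.id"), primary_key=True)
    content = Column(String)
    __mapper_args__ = {"polymorphic_identity": "file"}

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(File(content="# hello"))
session.commit()

# Join only the subclasses of interest for this one query, instead of "*":
poly = with_polymorphic(Entry, [File])
entries = session.query(poly).all()
```

This sidesteps the module-dependency problem entirely, since the subclass list is supplied at query time, after all modules are imported.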

Greetings, Torsten




[sqlalchemy] correlate between ORM and non-ORM expression

2011-07-12 Thread Torsten Landschoff
Hi Michael et al,

I cannot find a way to express my query in SQLAlchemy. What I want to
do is to load all ORM-mapped objects for which another query cannot
yield any result (NOT EXISTS):

  select * from entry where not exists (select 1 from lookup where 
lookup.skipped_id = entry.id)

For my application, the subquery is a bit more complicated (it's a join
over two tables). In any case, I cannot correlate the ORM query with
the SQL query.

The attached example prints the following query (SQLAlchemy 0.6.8):

SELECT entry.id AS entry_id, entry.content AS entry_content 
FROM entry 
WHERE NOT (EXISTS (SELECT 1 
FROM lookup))

How can I correlate the subquery in this context? There are a bunch of
correlate methods (Query.correlate, Select.correlate), but I do not
really understand how to make use of them here.

Do you have a hint for me?

Greetings, Torsten


from sqlalchemy import *
from sqlalchemy.orm import *

metadata = MetaData()

entry_table = Table("entry", metadata,
    Column("id", Integer, primary_key=True),
    Column("content", String)
)

lookup_table = Table("lookup", metadata,
    Column("skipped_id", Integer, ForeignKey(entry_table.c.id)))

class Entry(object): pass
mapper(Entry, entry_table)

engine = create_engine("sqlite:///", echo=True)
metadata.create_all(engine)

Session = sessionmaker(engine)
session = Session()

for v in (1, 3, 7, 9):
    session.execute(lookup_table.insert().values(skipped_id=v))

session.query(Entry).filter(
    not_(exists("1", from_obj=lookup_table)
         .correlate(entry_table))).all()


Re: [sqlalchemy] correlate between ORM and non-ORM expression

2011-07-12 Thread Torsten Landschoff
On Tue, 2011-07-12 at 17:47 +0200, Torsten Landschoff wrote:

 How can I correlate the subquery in this context? There is a bunch of
 correlate methods (Query.correlate, Select.correlate) but I do not
 really understand how to make use of it here.

Okay, I found it: correlate does not automatically use the foreign keys
to add a where clause for correlation. This is left to the caller.

Adding

.where(entry_table.c.id == lookup_table.c.skipped_id)

to my example makes it work.
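A self-contained Core version of that fix, sketched against a newer SQLAlchemy (1.4+ select()/exists() style rather than the 0.6 API shown above) with illustrative data:

```python
from sqlalchemy import (Column, ForeignKey, Integer, MetaData, String,
                        Table, create_engine, exists, select)

metadata = MetaData()
entry = Table("entry", metadata,
    Column("id", Integer, primary_key=True),
    Column("content", String))
lookup = Table("lookup", metadata,
    Column("skipped_id", Integer, ForeignKey("entry.id")))

engine = create_engine("sqlite://")
metadata.create_all(engine)
with engine.begin() as conn:
    conn.execute(entry.insert(),
                 [{"id": i, "content": "c%d" % i} for i in (1, 2, 3)])
    conn.execute(lookup.insert(), [{"skipped_id": 1}, {"skipped_id": 3}])

# The where() clause ties the subquery to the outer "entry" table; since
# the subquery's FROM list only mentions "lookup", "entry" is correlated
# to the enclosing SELECT.
stmt = select(entry.c.id).where(
    ~exists().where(lookup.c.skipped_id == entry.c.id)
).order_by(entry.c.id)

with engine.connect() as conn:
    rows = conn.execute(stmt).fetchall()
```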

Thanks!

Torsten





Re: [sqlalchemy] SQLAlchemy mapped classes and __new__ (ClassManager.new_instance undocumented)

2011-06-29 Thread Torsten Landschoff
Hi Michael, 

On Tue, 2011-06-28 at 10:28 -0400, Michael Bayer wrote:

  ClassManager class is not documented. What should I be using instead?
 
 Instrumentation has to establish state on a new object independent of
 __new__() - during pickling, the state is restored naturally as
 __dict__ is restored, during fetch of rows, new_instance() is used,
 during normal construction, __init__() is used.   
 
 class_manager() is documented we'd only need to get new_instance() and
 the use case documented, seems to me that would be bug fixed.

Fine with me. Thanks! :-)

Still, I wonder why __init__ is supported when using ORM-mapped
classes while __new__ is not. Is there any reason why the latter is harder
to support? Or is it a matter of too little gain (who is using __new__
anyway?) for too much work?

Greetings, Torsten




[sqlalchemy] UniqueConstraint breaks combination of ordering_list and association_proxy

2011-06-29 Thread Torsten Landschoff
Hi Michael, hi *,

here is another issue I ran into with SQLAlchemy. Basically, I am trying
to map a filesystem-like structure to SQL. Unfortunately, there is a
difference in that users can reorder the tree. I reduced my code to
the attached example.

Mapping the structure worked quite well so far, until I noticed that I
had duplicate entries in the same folder. So I added a unique constraint,
with the result that I can't overwrite the list of entries anymore:

folder.children = list(folder.children)

alone causes the problem. SQLAlchemy adds the 'new' entries first, which
violates the unique constraint.

Is there a way to do what I want without bigger changes to the code? For
now, I will disable the unique constraint again and add some checks to
the Python code.
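One possible workaround sketch (using a simplified stand-in model, not the attached one): clear the collection and flush first, so the DELETEs reach the database before the re-inserted rows touch the unique constraint.

```python
from sqlalchemy import (Column, ForeignKey, Integer, String,
                        UniqueConstraint, create_engine)
from sqlalchemy.orm import declarative_base, relationship, sessionmaker
from sqlalchemy.ext.orderinglist import ordering_list

Base = declarative_base()

class Item(Base):
    __tablename__ = "item"
    id = Column(Integer, primary_key=True)
    name = Column(String)

class FolderEntry(Base):
    __tablename__ = "folder_entry"
    id = Column(Integer, primary_key=True)
    folder_id = Column(Integer, ForeignKey("folder.id"), nullable=False)
    item_id = Column(Integer, ForeignKey("item.id"), nullable=False)
    position = Column(Integer)
    item = relationship(Item)
    __table_args__ = (UniqueConstraint("folder_id", "item_id"),)

class Folder(Base):
    __tablename__ = "folder"
    id = Column(Integer, primary_key=True)
    children = relationship(FolderEntry,
                            order_by=FolderEntry.position,
                            cascade="all, delete-orphan",
                            collection_class=ordering_list("position"))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

folder = Folder(children=[FolderEntry(item=Item(name="a")),
                          FolderEntry(item=Item(name="b"))])
session.add(folder)
session.commit()

# Reassign in reverse order: delete the old rows first, then insert anew.
items = [fe.item for fe in reversed(folder.children)]
folder.children = []
session.flush()  # the DELETEs go out here, before any INSERT
folder.children = [FolderEntry(item=it) for it in items]
session.commit()
```

The extra flush() forces the unit of work to remove the old association rows before the new ones are inserted, so the (folder_id, item_id) constraint is never transiently violated.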

Greetings, Torsten



"""Example of using association_proxy with ordering_list and how to break it ;-)

Basically, assigning the same entries again in a different order will violate
the unique constraint below. Remove it and you see the reason:

 INSERT INTO folder_entry (folder_id, entry_id, entry_order) VALUES (?, ?, ?)
 (2, 4, 0)
 INSERT INTO folder_entry (folder_id, entry_id, entry_order) VALUES (?, ?, ?)
 (2, 3, 1)
 DELETE FROM folder_entry WHERE folder_entry.surrogate_key = ?
 ((2,), (3,))

The new entries are inserted first and the old entries are deleted last.
"""

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.orderinglist import ordering_list
from sqlalchemy.ext.associationproxy import association_proxy

Base = declarative_base()


class Entry(Base):
    __tablename__ = "entry"
    id = Column(Integer, primary_key=True)
    entry_type = Column(String)
    __mapper_args__ = {'polymorphic_on': entry_type}
    name = Column(String)

class File(Entry):
    __tablename__ = "file"
    __mapper_args__ = {'polymorphic_identity': "file"}
    id = Column(Integer, ForeignKey("entry.id"), primary_key=True)
    content = Column(LargeBinary)

class FolderEntry(Base):
    __tablename__ = "folder_entry"
    surrogate_key = Column(Integer, primary_key=True)
    folder_id = Column(Integer, ForeignKey("folder.id"), index=True, nullable=False)
    entry_id = Column(Integer, ForeignKey("entry.id"), nullable=False)
    entry_order = Column(Integer)
    entry = relation(Entry)
    __table_args__ = (UniqueConstraint("folder_id", "entry_id"), {})

    def __init__(self, entry):
        u"""Constructor for association_proxy, which passes only association target."""
        self.entry = entry

class Folder(Entry):
    __tablename__ = "folder"
    __mapper_args__ = {'polymorphic_identity': "folder"}
    id = Column(Integer, ForeignKey("entry.id"), primary_key=True)

    children_relation = relation(FolderEntry,
        order_by=[FolderEntry.entry_order],
        cascade='save-update,merge,delete,delete-orphan',
        collection_class=ordering_list("entry_order"))

    children = association_proxy("children_relation", "entry")


engine = create_engine("sqlite:///", echo=True)
Base.metadata.create_all(engine)

Session = sessionmaker(engine)
session = Session()

root = Folder(name="root")
src = Folder(name="src")
root.children = [src]
src.children = [File(name="test.py", content="# Some content"),
                File(name="bar.py", content="# More content")]
session.add(root)
session.commit()

src.children = list(src.children)
session.commit()


Re: [sqlalchemy] SQLAlchemy mapped classes and __new__ (ClassManager.new_instance undocumented)

2011-06-29 Thread Torsten Landschoff
On Wed, 2011-06-29 at 15:31 -0400, Michael Bayer wrote:

 I'm not sure offhand if __new__() can be wrapped in a monkeypatch the
 way we do for __init__(), or if so what are the side effects of that,
 and it also would mean there's no way to create a new instance
 without ._sa_instance_state being tacked on.Particularly with
 pickling this is problematic.

Good point, I missed that __new__ might be needed for unpickling.
Thanks!

Greetings, Torsten




[sqlalchemy] SQLAlchemy mapped classes and __new__ (ClassManager.new_instance undocumented)

2011-06-28 Thread Torsten Landschoff
Hi *,

a while ago I noticed a small problem with SQLAlchemy. I was able to
work around this, but I am still wondering if this should be required.

I am doing synchronization between multiple databases (think distributed
VCS). Basically, each outdated object on the receiving side is updated
by updating its variables and committing it to the database.

Now there is some required information in those objects which is checked
in the __init__ method of each class. Therefore, to create an object from
the remote object, I skip the call to __init__ (like e.g. pickle
does).

(Interestingly, pickle creates an empty instance first and updates
__class__ afterwards. Why?!)
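The __init__-skipping pattern being described can be seen with nothing but the standard library; pickle restores an instance without ever calling __init__, and __new__ does the same by hand:

```python
import pickle

class Strict(object):
    def __init__(self, value):
        if value is None:
            raise ValueError("value is required")
        self.value = value

s = Strict(42)
clone = pickle.loads(pickle.dumps(s))  # __init__ is never called on unpickling
blank = Strict.__new__(Strict)         # the same trick done by hand: no __init__
```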

So to create the instances for the mapped objects, I used

  instance = MyClass.__new__(MyClass)

as in the attached example. This fails with an AttributeError for
_sa_instance_state. My workaround is to use

  instance = manager_of_class(MyClass).new_instance()

but I am wondering if this should be needed, especially since the
ClassManager class is not documented. What should I be using instead?

Greetings, Torsten


from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.orm.attributes import manager_of_class

metadata = MetaData()

class Base(object): pass

base_table = Table("base", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String))

mapper(Base, base_table)

b = Base()
assert b.name is None

b = manager_of_class(Base).new_instance()
b.name = "using manager_of_class"

b = Base.__new__(Base)
b.name = "using __new__"


[sqlalchemy] Using identity_key in after_commit session event (SQLAlchemy 0.6.7)

2011-05-26 Thread Torsten Landschoff
, in _connection_for_bind
self._assert_is_active()
  File 
/usr/local/lib/python2.6/dist-packages/SQLAlchemy-0.6.7dev-py2.6.egg/sqlalchemy/orm/session.py,
 line 247, in _assert_is_active
This Session's transaction has been rolled back 
InvalidRequestError: This Session's transaction has been rolled back by a 
nested rollback() call.  To begin a new transaction, issue Session.rollback() 
first.




Re: [sqlalchemy] Using identity_key in after_commit session event (SQLAlchemy 0.6.7)

2011-05-26 Thread Torsten Landschoff
On Thu, 2011-05-26 at 08:34 +0200, Torsten Landschoff wrote:
 basic question: Can I call identity_key in after_commit? If not, are there 
 any alternatives?

A bit of extra information: My usage of identity_key stems from this
thread on this list:

http://groups.google.com/group/sqlalchemy/browse_frm/thread/f62edf05c2696723/a3fc24f29bad7123?lnk=gstq=identity_key#a3fc24f29bad7123

(Message-Id is 1276873541.4601.19.camel@sharokan.intern if anybody
cares)

Greetings, Torsten




Re: [sqlalchemy] AttributeError: 'Preprocess' object has no attribute 'detach' during Session.close

2011-05-16 Thread Torsten Landschoff
On Tue, 2011-05-03 at 09:45 -0400, Michael Bayer wrote:
 It would appear as though a mapped class, I'm assuming called Preprocess, 
 is being interpreted as an instance state, an internal ORM construct that 
 would normally be attached to the mapped object.I'm not familiar offhand 
 with any documented usage which could cause that.   If you were to somehow 
 get your mapped object injected into the identity map incorrectly, though 
 you'd have to try fairly hard to pull that off since it guards against 
 incorrect incoming state. 
 
 On May 3, 2011, at 4:12 AM, Torsten Landschoff wrote:
 
  Hi *,
  
  has somebody seen this error and maybe an idea how I ended up triggering
  it? ;-)
  
  File 
  /home/dynamore/loco2/deploy/linux64_rhel5.4/build/pyi.linux2/loco2/outPYZ1.pyz/sqlalchemy.orm.session,
   line 746, in close
   File 
  /home/dynamore/loco2/deploy/linux64_rhel5.4/build/pyi.linux2/loco2/outPYZ1.pyz/sqlalchemy.orm.session,
   line 766, in expunge_all
  AttributeError: 'Preprocess' object has no attribute 'detach'
  

Interestingly, updating the build machine fixed this problem. No idea
what caused it. :-(

Greetings, Torsten




Re: [sqlalchemy] Relationship wrt. inheritance problem (and suggested patch)

2011-05-16 Thread Torsten Landschoff
On Fri, 2011-04-29 at 20:14 -0400, Michael Bayer wrote:
  
  Do you want me to write a patch including a test case for the test
  suite?
 
 sure if you're inclined

Sorry for leaving that slip. I just noticed that you fixed it and in a
much better way than I would have been able to come up with.

Thank you very much!

Greetings, Torsten




[sqlalchemy] AttributeError: 'Preprocess' object has no attribute 'detach' during Session.close

2011-05-03 Thread Torsten Landschoff
Hi *,

has somebody seen this error and maybe an idea how I ended up triggering
it? ;-)

File 
/home/dynamore/loco2/deploy/linux64_rhel5.4/build/pyi.linux2/loco2/outPYZ1.pyz/sqlalchemy.orm.session,
 line 746, in close
  File 
/home/dynamore/loco2/deploy/linux64_rhel5.4/build/pyi.linux2/loco2/outPYZ1.pyz/sqlalchemy.orm.session,
 line 766, in expunge_all
AttributeError: 'Preprocess' object has no attribute 'detach'

This is for SQLAlchemy 0.6.5 as I just noticed - I have to update the
deploy machine.

Greetings, Torsten




Re: [sqlalchemy] Relationship wrt. inheritance problem (and suggested patch)

2011-04-29 Thread Torsten Landschoff
On Thu, 2011-04-28 at 11:36 -0400, Michael Bayer wrote:
 funny, that's absolutely a bug and it works in declarative because the
 same fix you have there was already applied some time ago within
 declarative.   this is #2153 targeted at 0.6.8 so I will get to it
 when i make a pass through a set of small 0.6.8 issues later in the
 week.

Great, thanks.

FYI, I ran the test suite with my trivial patch and it passed all tests.
I guess it was using only SQLite here, but that should be independent of
the backend.

Do you want me to write a patch including a test case for the test
suite?

Greetings, Torsten




Re: [sqlalchemy] Re: sqlite transaction isolation, select for update, race condition

2011-04-29 Thread Torsten Landschoff
Hi Clay,

On Wed, 2011-04-27 at 11:21 -0700, Clay Gerrard wrote:

 ... and all connections would automatically issue the correct begin
 statement and acquire a reserved lock at the beginning of the
 transaction.  But as it is, they don't do anything until they get down
 to the update, and it's kind of a disaster.

I had this disaster a number of times as well. For that reason I created
a patch for this issue: http://bugs.python.org/issue10740

That change makes the time when pysqlite starts a transaction
configurable. The obvious approach of starting a transaction on each
command breaks "pragma foreign_keys=on", since that pragma has to be
issued outside a transaction.
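The idea behind the patch can be sketched with the stdlib sqlite3 module alone: manage transactions explicitly (isolation_level=None), issue the PRAGMA outside any transaction, and take the write lock up front with BEGIN IMMEDIATE so concurrent writers fail fast instead of racing at the first UPDATE.

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
conn.execute("PRAGMA foreign_keys=ON")  # must run outside a transaction

conn.execute("BEGIN IMMEDIATE")  # acquire the RESERVED lock at transaction start
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
conn.execute("COMMIT")

row = conn.execute("SELECT x FROM t").fetchone()
```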

I welcome comments and suggestions about that patch.

Greetings, Torsten




[sqlalchemy] Relationship wrt. inheritance problem (and suggested patch)

2011-04-28 Thread Torsten Landschoff
()
---

However, I already failed to do the whole conversion in one giant commit, 
therefore I would prefer to do it in small steps.





Re: [sqlalchemy] Running an ORM query during Session after_attach event

2010-12-21 Thread Torsten Landschoff
On Mon, 2010-12-20 at 10:16 -0500, Michael Bayer wrote:
 On Dec 20, 2010, at 3:42 AM, Torsten Landschoff wrote:

  INSERT INTO group_items (group_id, item_id, item_order) VALUES (?, ?, ?)
  (None, None, None)
  
  Both the group and the item were created correctly so I ended up with a
  database which has all the data, just missing their relationships ;-)
  
  
  I reverted to running the query when first connecting to the database.
  What I would like to know: Is it officially impossible to run queries
  from session events? What session functionality is usable from within
  session events? (I know that session.{dirty, new, deleted} works fine).
 
 I can't think of anything that would go wrong from running a Query
 inside of the events - the flush process itself runs queries.   Also
 in the before_flush() event you can also change the state of the
 session to any degree.

Good to know, thanks!

 In this case, we have another verb besides Query, which is
 replace, as in replace a None reference.   That sounds like the
 thing you're doing that changes state, likely in an unexpected way
 based on your description.   Would need to understand what you mean by

The part of replacing the None reference still happens with preloading
and seems to be okay now.

Basically I have a field "format" attached to the item that is inserted.
This is a relation (via Column "format_id") into a data format table.
Most of the time, the data will be in the default format, which is
therefore automatically set if not overridden.

So in the after_attach event, I set the format relation to the default
format in the database.

Now this is done via

class MySession(Session):
    ...
    def __init__(self, ...):
        Session.__init__(...)
        self._format, = self.query(DataFormat).filter(...)

    @property
    def default_format(self):
        return self._format

In after_attach:

...
if instance.format is None:
    instance.format = session.default_format

I would like to have the database supply the default value, but it is
user-configurable. Interestingly, in any case the format is set
correctly. But the group_items table update is broken.

  "replace" here and under what codepath that's occurring.   Under a
 normal attach event via a userland add() it should be OK.

So that might be the problem then. Attaching the instance to the session
works by the save-update cascade: items are added to group.children,
which models the group_items relation. I assumed that adding entries to
that relation would cause an add() and therefore the after_attach
callback. Is after_attach only called when flushing?
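A hedged sketch of the before_flush() alternative Michael points at above, where queries are safe and cascaded objects are visible in session.new. The model names (Item, DataFormat, format) mirror this discussion and are illustrative, not the real application code:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine, event
from sqlalchemy.orm import (Session, declarative_base, relationship,
                            sessionmaker)

Base = declarative_base()

class DataFormat(Base):
    __tablename__ = "data_format"
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Item(Base):
    __tablename__ = "item"
    id = Column(Integer, primary_key=True)
    format_id = Column(Integer, ForeignKey("data_format.id"))
    format = relationship(DataFormat)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
SessionFactory = sessionmaker(bind=engine)

setup = SessionFactory()
setup.add(DataFormat(name="default"))
setup.commit()
setup.close()

@event.listens_for(Session, "before_flush")
def apply_default_format(session, flush_context, instances):
    # Queries are safe here: autoflush is suspended while flushing.
    default = session.query(DataFormat).filter_by(name="default").one()
    for obj in session.new:
        if isinstance(obj, Item) and obj.format is None:
            obj.format = default

session = SessionFactory()
session.add(Item())
session.commit()
loaded = session.query(Item).one()
```

Because before_flush fires once per flush, it also catches objects that entered the session late via cascades, which is exactly the case the after_attach approach misses.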

Greetings, Torsten




[sqlalchemy] Running an ORM query during Session after_attach event

2010-12-20 Thread Torsten Landschoff
Hi all.

I just stumbled across an after_attach event handler in my code. That
handler used to replace a None reference with a persistent instance that
was already loaded.

After a few changes, it ended up doing a query to load that instance. It
seems this messes up the session state, which caused a query to change
from

INSERT INTO group_items (group_id, item_id, item_order) VALUES (?, ?, ?)
(242, 244, 0)

into the less useful

INSERT INTO group_items (group_id, item_id, item_order) VALUES (?, ?, ?)
(None, None, None)

Both the group and the item were created correctly, so I ended up with a
database which has all the data, just missing their relationships ;-)


I reverted to running the query when first connecting to the database.
What I would like to know: Is it officially impossible to run queries
from session events? What session functionality is usable from within
session events? (I know that session.{dirty, new, deleted} works fine).

Thanks and happy holidays,

Torsten




[sqlalchemy] Adding business logic to Session by inheritance?

2010-11-22 Thread Torsten Landschoff
Hi again!

This question can probably only be answered by one of the developers of
SQLAlchemy.

I need to store information that is valid for the whole session in a
sane place, together with accessor methods. I ended up extending the
orm.Session class and passing my class via class_=MySession to the
sessionmaker.

Originally I added only 2 fields and 4 methods to the Session but it
seems like this will grow a bit. Is it okay to add a whole bunch of
methods to my Session class? Any hints how to avoid name clashes for
future SQLAlchemy versions?

Greetings, Torsten




Re: [sqlalchemy] Adding business logic to Session by inheritance?

2010-11-22 Thread Torsten Landschoff
Hi Michael,

Am Montag, den 22.11.2010, 11:18 -0500 schrieb Michael Bayer:
  Originally I added only 2 fields and 4 methods to the Session but it
  seems like this will grow a bit. Is it okay to add a whole bunch of
  methods to my Session class? Any hints how to avoid name clashes for
  future SQLAlchemy versions?
 
 yeah there's no magic bullet here, you can add as many methods as you like 
 and each one raises the chance of a future collision by a tiny amount.  We 
 don't add too many methods to Session so its not terribly dangerous, but you 
 could consider building a Session facade of your own which delegates to the 
 real Session internally.  If I were really adding a comprehensive 
 business-centric persistence layer I'd almost certainly do that.

I originally had such a facade, but I dumped it because I did not want
to reimplement the scoped session adapter. Any hints how to get the best
of both worlds? ;-)

Thanks, Torsten




[sqlalchemy] Avoiding mutable column types

2010-11-22 Thread Torsten Landschoff
Hi again!

Sorry for all the questions, here is another one.

What I really would like to do is to read a chunk of data for a mapped
instance and disseminate that into a number of attributes (not the usual
one column - one attribute mapping). This could easily be done using
an orm.reconstructor.

Writing to that data triggers an internal flag which I would like the
ORM to take into account to detect that the instance is dirty (even
better: signal the ORM at the same time that it is dirty). During flush,
I would like to have an orm.deconstructor ;-) method run that collects
the data from the instance attributes for flushing it to the database.

I am far from having this implemented. I tried starting with a simple
example using the MutableType mixin:

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.types import TypeDecorator, MutableType
import Image
import ImageEnhance
import numpy
from cStringIO import StringIO

engine = create_engine("sqlite:///tmp/test.db", echo=True)

class _ImageType(TypeDecorator):
    impl = LargeBinary

    def process_bind_param(self, value, dialect):
        im = Image.fromarray(value)
        fakefile = StringIO()
        im.save(fakefile, 'png')
        return fakefile.getvalue()

    def process_result_value(self, value, dialect):
        fakefile = StringIO(value)
        return numpy.array(Image.open(fakefile))

class ImageType(MutableType, _ImageType):
    def copy_value(self, value):
        print "creating a copy"
        return numpy.copy(value)

    def compare_values(self, x, y):
        print "compare: {0} vs. {1}".format(id(x), id(y))
        result = (x == y).all()
        print "result: {0}".format(result)
        return result

metadata = MetaData()
image_table = Table("images", metadata,
    Column("id", Integer, primary_key=True),
    Column("image", ImageType))
metadata.create_all(engine)

class StoredImage(object):
    def __init__(self, fname):
        self.image = numpy.array(Image.open(fname))

mapper(StoredImage, image_table)

Session = sessionmaker(engine)
session = Session()

images = session.query(StoredImage).all()
if images:
    for im in images:
        data = im.image
        for row in xrange(len(data)):
            for col in xrange(len(data[0])):
                r, g, b = data[row][col]
                data[row][col] = b, g, r
        # Image.fromarray(im.image).show()
    print "dirty", session.dirty
else:
    im = StoredImage("/usr/share/backgrounds/Daisy.jpg")
    session.add(im)
session.commit()


This lacks almost everything that I really want:

o data is still in a single attribute
o no dirty flag handling, but full content comparison
o the comparison is run each time the ORM looks up its dirty instances.
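For what it's worth, later SQLAlchemy releases grew the sqlalchemy.ext.mutable extension, which replaces this copy-and-compare scheme with change events: mutating the value flags its parent dirty at mutation time. A minimal sketch of the dirty-flag mechanics (not the image example above; MutableDict over PickleType is just the simplest demonstration):

```python
from sqlalchemy import Column, Integer, PickleType, create_engine
from sqlalchemy.orm import Session, declarative_base
from sqlalchemy.ext.mutable import MutableDict

Base = declarative_base()

class Record(Base):
    __tablename__ = "records"
    id = Column(Integer, primary_key=True)
    # MutableDict emits a change event on item assignment, so the ORM
    # never needs to copy and compare the whole value at flush time.
    data = Column(MutableDict.as_mutable(PickleType), default=dict)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
rec = Record(data={"r": 1, "g": 2, "b": 3})
session.add(rec)
session.commit()

rec.data["r"] = 255          # in-place mutation...
assert rec in session.dirty  # ...is picked up immediately
session.commit()
```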


Just pondering...

Torsten




[sqlalchemy] Something like orderinglist for secondary tables?

2010-11-18 Thread Torsten Landschoff
Hi *,

I am fighting half a day with something I expected to be trivial: keeping
the order of items in a collection implemented via a secondary table
(many-to-many relationship).

Basically, I have a Collection class with a relationship to Items in the
collection. That relationship is configured via

items=relation(Item,
    secondary=collection_item_table,
    order_by=[collection_item_table.c.item_order])

Now my problem is: How to update the item_order column in the table? So
far I did not find any way to do that. For a many-to-one relationship,
orderinglist will do this just fine.

I tried using a MapperExtension that goes through the relevant rows in
the secondary table and updates the item_order column (code attached).
It turns out that the after_insert and after_update extension points are
called before the child collections are flushed, so the primary keys of
any new items are not available at that time. Apart from that, it is a
bit heavy on the database to update all the item rows for a collection on
each change...

Another approach I tried was to replace the relationship with custom
code querying the relation when reconstructing a collection instance.
The loading part works fine but I failed in populating the
collection_item_table in the first place, as the items must be flushed
before the Collection for the item_id to be available and I did not find
a way to tell SQLAlchemy of this dependency.

Any hint on how to do this is greatly appreciated.

Greetings, Torsten


#! /usr/bin/python

from sqlalchemy import *
from sqlalchemy.orm import *

# Set up the tables
meta = MetaData()
collection_table = Table("collection", meta,
    Column("collection_id", Integer, primary_key=True))
item_table = Table("item", meta,
    Column("item_id", Integer, primary_key=True),
    Column("name", String))
collection_item_table = Table("collection_item", meta,
    Column("collection_id", ForeignKey(collection_table.c.collection_id)),
    Column("item_id", ForeignKey(item_table.c.item_id)),
    Column("item_order", Integer))

# Mapped classes
class Collection(object):
    def shallow_copy(self):
        new = Collection()
        new.items = self.items
        return new

class Item(object):
    def __init__(self, name):
        self.name = name

# In a function to test without committing (which will work of course)
def maybe_commit(session):
    session.commit()
    pass


class CorrectOrderExtension(MapperExtension):
    """Updates the order of the entries in the collection_item_table to match
    with the order in the items field of a Collection instance.

    Does not work if items were not flushed before the Collection - how to
    force flushing order? I would have expected that after_update/after_insert
    are called after all collection attributes are completely written out.
    """

    def after_update(self, mapper, connection, instance):
        update = collection_item_table.update().where(
            collection_item_table.c.collection_id == bindparam('b_collection')).where(
            collection_item_table.c.item_id == bindparam('b_item_id')
        ).values(item_order=bindparam('b_item_order'))

        collection_id = instance.collection_id
        index = 0
        updates = []
        for item in instance.items:
            item_id = item.item_id
            assert item_id
            updates.append(dict(
                b_collection=collection_id,
                b_item_id=item_id,
                b_item_order=index))
            index += 1

        if updates:
            connection.execute(update, updates)
        return EXT_CONTINUE

    def after_insert(self, mapper, connection, instance):
        return self.after_update(mapper, connection, instance)


# Do the Object Relational Mapping
mapper(Item, item_table)
mapper(Collection, collection_table,
    extension=CorrectOrderExtension(),
    properties=dict(
        items=relation(Item,
            secondary=collection_item_table,
            order_by=[collection_item_table.c.item_order])))

# Open database
engine = create_engine("sqlite:///", echo=True)
Session = sessionmaker(bind=engine)
meta.create_all(engine)

# Some items to play with
items = [Item(name) for name in ("foo", "bar", "baz", "qux")]

session = Session()
c = Collection()
c.items = items[:3]
session.add(c)
maybe_commit(session)

Re: [sqlalchemy] Something like orderinglist for secondary tables?

2010-11-18 Thread Torsten Landschoff
Hi Michael,

Thanks for your lightning fast reply!

On Thu, 2010-11-18 at 10:17 -0500, Michael Bayer wrote:

 this is correct.  The functionality provided by secondary is that SQLA will 
 maintain a table with foreign keys to the related primary keys on either 
 side.  It does not do anything at all with additional columns on the 
 secondary table.   If your secondary table has additional columns you 
 need to deal with, you no longer use secondary and instead use the 
 association object pattern : 
 http://www.sqlalchemy.org/docs/orm/relationships.html#association-object .
 To make this pattern act more like secondary in the usual case, you use 
 associationproxy:  
 http://www.sqlalchemy.org/docs/orm/extensions/associationproxy.html
 
 You could most likely use the existing orderinglist extension in conjunction 
 with associationproxy to maintain the ordering you want, in Python.

Okay, I updated my example code and it actually works now. However, it
feels like a lot of additional complexity just for adding order.

Greetings, Torsten


#! /usr/bin/python

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.orderinglist import ordering_list
from sqlalchemy.ext.associationproxy import association_proxy

meta = MetaData()

collection_table = Table("collection", meta,
    Column("collection_id", Integer, primary_key=True))

item_table = Table("item", meta,
    Column("item_id", Integer, primary_key=True),
    Column("name", String))

collection_item_table = Table("collection_item", meta,
    # Need a surrogate key to allow duplicate entries in the list.
    Column("id", Integer, primary_key=True),
    Column("collection_id", ForeignKey(collection_table.c.collection_id), index=True),
    Column("item_id", ForeignKey(item_table.c.item_id)),
    Column("item_order", Integer))


class Collection(object):
    def shallow_copy(self):
        new = Collection()
        new.items = self.items
        return new

    items = association_proxy("items_relation", "item")

class Item(object):
    def __init__(self, name):
        self.name = name

class CollectionItemAssociation(object):
    def __init__(self, item):
        self.item = item

def maybe_commit(session):
    session.commit()
    pass

mapper(Item, item_table)
mapper(CollectionItemAssociation, collection_item_table, properties=dict(
    item=relation(Item)))
mapper(Collection, collection_table, properties=dict(
    items_relation=relation(
        CollectionItemAssociation,
        order_by=[collection_item_table.c.item_order],
        collection_class=ordering_list('item_order'),)))

engine = create_engine("sqlite:///", echo=True)
Session = sessionmaker(bind=engine)
meta.create_all(engine)

items = [Item(name) for name in ("foo", "bar", "baz", "qux")]

session = Session()
c = Collection()
c.items = items[:3]
session.add(c)
maybe_commit(session)

c.items[1] = items[3]
maybe_commit(session)

assert [x.name for x in c.items] == ["foo", "qux", "baz"]

c.items.append(c.items[0])
maybe_commit(session)
assert [x.name for x in c.items] == ["foo", "qux", "baz", "foo"]


[sqlalchemy] Insert from select?!

2010-11-03 Thread Torsten Landschoff
Hi *,

I am wondering if there is a way to generate an insert from select with
SQLAlchemy. Consider this code:


from sqlalchemy import *

engine = create_engine("sqlite:///demo.ldb", echo=True)

md = MetaData()
users = Table("users", md,
    Column("id", Integer, primary_key=True),
    Column("name", String),
    Column("status", Integer))
md.create_all(engine)

engine.execute(users.insert().values(name="Joe Sixpack", status=1))
engine.execute(users.insert().values(name="Jane Citizen", status=1))

# insert from select?
# engine.execute(users.insert().values(
#     users.select([users.c.name]), status=2))

conn = engine.connect()
with conn.begin():
    for row in conn.execute(users.select()).fetchall():
        conn.execute(users.insert().values(
            name=row["name"], status=2))


Is there a way to generate the natural SQL for this:


insert into users (name, status) select name, 2 as status from users

Basically I would like to tell SQLAlchemy to use a query to provide the
values for insert. StackOverflow says this can't be done, but I can't
believe that. :-)
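As a historical note for archive readers: later SQLAlchemy releases added Insert.from_select(), which renders exactly this statement without a custom compiler extension. A sketch of what the example above looks like with it, assuming a reasonably recent SQLAlchemy:

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, insert, literal, select)

engine = create_engine("sqlite://")
md = MetaData()
users = Table("users", md,
    Column("id", Integer, primary_key=True),
    Column("name", String),
    Column("status", Integer))
md.create_all(engine)

with engine.begin() as conn:
    conn.execute(insert(users), [
        {"name": "Joe Sixpack", "status": 1},
        {"name": "Jane Citizen", "status": 1},
    ])
    # Renders: INSERT INTO users (name, status) SELECT name, ? FROM users
    sel = select(users.c.name, literal(2))
    conn.execute(insert(users).from_select(["name", "status"], sel))
    rows = sorted(tuple(r) for r in conn.execute(
        select(users.c.name, users.c.status)))

assert rows == [("Jane Citizen", 1), ("Jane Citizen", 2),
                ("Joe Sixpack", 1), ("Joe Sixpack", 2)]
```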

Greetings, Torsten




Re: [sqlalchemy] Insert from select?!

2010-11-03 Thread Torsten Landschoff
On Wed, 2010-11-03 at 17:16 +0530, akm wrote:
 Try this
 http://www.sqlalchemy.org/docs/05/reference/ext/compiler.html?highlight=compiler#compiling-sub-elements-of-a-custom-expression-construct

Thanks for the pointer, missed that before. So it is in fact doable :)

Greetings, Torsten




Re: [sqlalchemy] Re: SQLite: Rolling back DDL requests

2010-11-02 Thread Torsten Landschoff
Hi Michael,

I only noticed the activity on this thread today.

On Sat, 2010-08-14 at 11:07 -0400, Michael Bayer wrote:

  reproduce the problem and it suggests the problem stems from some
  behaviour of transactions or of the engine.base.Connection class.  I
  don't quite know what to make of it yet, but I think it shows that the
  effect of passing it through is being counteracted by something
  else.  I ran this on Python 2.6.5 with SA 0.6.3 on Windows.
 
 Postgresql and MS-SQL support transactional DDL, and I use this feature all 
 the time with SQLAlchemy which does nothing special to support them.

That is not in question. This seems to be related only to SQLite.

 In addition, isolation_level=None with pysqlite disables the DBAPIs entire 
 transactional system.  SQLAlchemy relies upon this system to handle proper 
 transactional behavior.  Per their documentation, some statements will fail 
 if executed in a transaction - SQLAlchemy does not want to be involved in 
 reorganizing how the DBAPI wants to approach things, emitting manual 'begin' 
 and 'commit' strings, etc.

Reading the pysqlite source, all that this magic isolation_level setting
does is emit a "begin" statement in front of UPDATE, DELETE, INSERT
and REPLACE commands, and implicitly commit whenever any other
non-select statement is executed (which includes SAVEPOINT commands!).

I don't think it makes a huge difference for SQLAlchemy to rely on this
insertion of begin statements.

References:

Emitting begin (via _pysqlite_connection_begin):
http://code.google.com/p/pysqlite/source/browse/src/cursor.c?r=2.6.0#598

That's the only invocation of _pysqlite_connection_begin. After reading
a bit more, I can see your point. Setting isolation_level to None
actually disables the commit and rollback methods on connection objects.
Whee!


 So I consider this a pysqlite bug, and they should offer a mode by which 
 there is no implicit commit for CREATE TABLE.
 
 See http://docs.python.org/library/sqlite3.html#controlling-transactions 

Agreed.

Greetings, Torsten





Re: [sqlalchemy] Storing lists of simple strings

2010-10-28 Thread Torsten Landschoff
Hi Michael,

Am Mittwoch, den 27.10.2010, 13:37 -0400 schrieb Michael Bayer:


 Configuration is too unwieldy ?   Since you're using declarative ,
 just using @declared_attr would give you access to the class:

Nice one. I missed classproperty and declared_attr so far.

 def make_ordered_list(key, pk, type_):
     @declared_attr
     def go(cls):
         class Name(Base):
             __tablename__ = "%s_names" % key
             rel_id = Column("related_id", Integer, ForeignKey(pk),
                             primary_key=True, index=True)
             position = Column("position", Integer)
             value = Column("value", type_, primary_key=True)
             def __init__(self, value):
                 self.value = value
         private_key = "_" + key

         setattr(cls, key, association_proxy(private_key, "value"))

Now that's a hefty trick, updating the class when reading the property.
I think this should be protected against running twice!? Or does the ORM
mapper do this for us?
 
 if you wanted to get rid of saying _names, you need to create
 something that is added after the fact:

Yep, or add a placeholder into the class and have the mapper replace it
by the real thing. I'll look into that!

Thanks for all your support with SQLAlchemy; I am always impressed by
how you come up with helpful answers in virtually no time. I tried the
Donate button on SA.org so you can have a beer at my expense :-)

Greetings, Torsten





Re: [sqlalchemy] Storing lists of simple strings

2010-10-28 Thread Torsten Landschoff
Hi Michael,

Am Donnerstag, den 28.10.2010, 11:36 -0400 schrieb Michael Bayer:

 Then the mapper is constructed which immediately replaces your function with 
 an ORM instrumented attribute.   So basically yes its a one shot.

Thanks for the clarification.

Greetings, Torsten




Re: [sqlalchemy] Re: alchemy-migrate with DeclarativeBase

2010-07-22 Thread Torsten Landschoff
Hi Shane,

Sorry, I missed your reply among the other messages on the sqlalchemy
list.

On Wed, 2010-06-23 at 13:24 -0700, Shane wrote:
 How would you define the Account class?  Calling create on my
 DeclarativeBase Account object was one of the first things I tried,
 but I keep getting:
 
 AttributeError: type object 'Account' has no attribute 'create'
 
 Maybe a version difference in sqlalchemy?

I don't have an Account class, and in fact I don't know why I
capitalized Account in my email. The variable is called
account_table, so with declarative it would probably be Account.__table__
or something like that.

In fact I just ran the attached python code which works just fine.

Greetings, Torsten


from sqlalchemy import *
from sqlalchemy.ext.declarative import declarative_base

base = declarative_base()

class Account(base):
    __tablename__ = "accounts"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite:///:memory:", echo=True)
Account.__table__.create(engine)


[sqlalchemy] SQLite: Rolling back DDL requests

2010-06-24 Thread Torsten Landschoff
Hi *,

This is as much a question as it is a write-up of the issue I am
fighting with. To summarize: my problem was caused by the sqlite3 Python
module inserting commits automatically. I think it should not do that!


today I ran into a problem with rolling back DDL requests to SQLite.
Interestingly, this works just fine from the sqlite3 command line
utility.

Here is a minimal example to illustrate:

-
from sqlalchemy import create_engine

engine = create_engine("sqlite:///test.sqlite", echo=True)
conn = engine.connect()
txn = conn.begin()
conn.execute("create table demo (foo varchar, bar varchar)")
txn.rollback()
-

I am actually using DDL instances for those requests and they are
creating triggers, but either way, nothing is rolled back.

The output is:

INFO sqlalchemy.engine.base.Engine.0x...f090 BEGIN
INFO sqlalchemy.engine.base.Engine.0x...f090 create table demo (foo
varchar, bar varchar)
INFO sqlalchemy.engine.base.Engine.0x...f090 ()
INFO sqlalchemy.engine.base.Engine.0x...f090 ROLLBACK

Typing the exactly same commands into the sqlite3 command line interface
shows that it is in fact rolling back fine.

I reproduced the same thing using the sqlite3 python bindings directly:

-
from sqlite3 import connect
c = connect("test.sqlite")
c.execute("begin")
c.execute("create table demo (foo varchar, bar varchar)")
c.rollback()
-

Same problem. Now, I can't really infer from the sqlite3 documentation
how transactions are managed. After a bit of experiment, I found out
that passing isolation_level=None to the sqlite3.connect function gives
me the expected behaviour: My changes are rolled back.
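For illustration, the working variant of the direct sqlite3 reproduction. Issuing the ROLLBACK as an explicit SQL statement keeps transaction control entirely in SQL, sidestepping the question of what Connection.rollback() does in this mode:

```python
import sqlite3

# With isolation_level=None, pysqlite emits no implicit COMMIT before
# the CREATE TABLE, so the explicit transaction can be rolled back.
c = sqlite3.connect(":memory:", isolation_level=None)
c.execute("begin")
c.execute("create table demo (foo varchar, bar varchar)")
c.execute("rollback")
tables = c.execute(
    "select name from sqlite_master where type = 'table'").fetchall()
assert tables == []  # the CREATE TABLE was rolled back
```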

However, when passing this same option to create_engine, it has no
effect, which does not surprise me given that the documentation at
http://www.sqlalchemy.org/docs/reference/dialects/sqlite.html only talks
about sending a pragma down to each sqlite connection, while the
misbehaviour seems to be caused by the sqlite3 module adding commit
instructions into the command stream.

More precisely, this code here automatically inserts a commit in front
of all commands sent to sqlite which are not select, update, delete,
insert, or replace:

http://code.python.org/hg/branches/release2.6-maint/file/7fa70e059572/Modules/_sqlite/cursor.c#l571


So, as a stop gap measure, how do I pass isolation_level=None to
sqlite3.connect via SA?

Thanks!

Torsten




Re: [sqlalchemy] SQLite: Rolling back DDL requests

2010-06-24 Thread Torsten Landschoff
I am still astonished about sqlite3 messing up transaction boundaries.
And it is even worse than I thought because it breaks savepoints
completely.

Have a look at this thread:
http://mail.python.org/pipermail/python-list/2010-March/1239395.html

Quote: Setting isolation_level=None is a must for anyone who want to do
any serious work with sqlite.

I tend to agree.

Hope this stops somebody from running into the same problem.

Greetings, Torsten




Re: [sqlalchemy] Retrieving new instance by identity key?

2010-06-19 Thread Torsten Landschoff
Hi Michael,

On Fri, 2010-06-18 at 11:32 -0400, Michael Bayer wrote:

  session.get_instance_by_key(key)
  
  to be in the API, but I am unable to find it.
  
  Any hints?
 
 technically query._get() does this, but in a public sense, for the moment, 
 you'd say
 
 session.query(key[0]).get(key[1])
 
 I say for the moment since the above makes some assumptions about the format 
 of the key itself, which is less than ideal.   We haven't built comprehensive 
 patterns out based on identity keys, those util methods are currently the 
 product of one particular user who really wanted to work that way.   

Thanks for your fast reply. I guess I'll go that route then, for now. :)
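For completeness, that hint as a self-contained sketch (written against the later declarative API; the User class is made up for illustration, and identity_key() gained a third tuple element in newer releases, but key[0]/key[1] are still the class and the primary-key tuple):

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base, sessionmaker
from sqlalchemy.orm.util import identity_key

Base = declarative_base()

class User(Base):  # made-up example class
    __tablename__ = "users"
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String)

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

u = User(name="alice")
session.add(u)
session.commit()

# Ask the ORM for the instance's identity key ...
key = identity_key(instance=u)       # key[0] is User, key[1] is (1,)
# ... and fetch the instance again by that key, as suggested above.
fetched = session.query(key[0]).get(key[1])
assert fetched is u
```

Note that query(...).get() is a legacy spelling in recent SQLAlchemy; Session.get(cls, pk) is the modern equivalent.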

Greetings, Torsten




[sqlalchemy] Retrieving new instance by identity key?

2010-06-18 Thread Torsten Landschoff
Hi *,

I wonder if it is possible and supported in SQLAlchemy to query an
instance by identity key.

Use case: I want a length operation to run in an extra process. That
means I will need an extra database connection in that process and have
to retrieve the objects I am working with again.

To implement this in a generic way, I want to ask sqlalchemy about the
identity key of the instance (using sqlalchemy.orm.util.identity_key). 

Using that key, I can retrieve the instance again in the new process.
But where is the method that can do that? I was expecting something like

session.get_instance_by_key(key)

to be in the API, but I am unable to find it.

Any hints?

Greetings, Torsten




Re: [sqlalchemy] Retrieving new instance by identity key?

2010-06-18 Thread Torsten Landschoff
On Fri, 2010-06-18 at 17:05 +0200, Torsten Landschoff wrote:

 Using that key, I can retrieve the instance again in the new process.
 But where is the method that can do that? I was expecting something like
 
 session.get_instance_by_key(key)
 
 to be in the API, but I am unable to find it.

Looking further, I found Query.key() but its arguments seem to be
incompatible with the result of util.identity_key. In fact, it
translates its input to match the latter and calls Query._get().

What am I missing?

Greetings, Torsten




Re: [sqlalchemy] Re: alchemy-migrate with DeclarativeBase

2010-06-18 Thread Torsten Landschoff
Hi Shane,

On Thu, 2010-06-17 at 23:09 -0700, Shane wrote:

 def upgrade():
   try:
   session.begin() # Start transaction, but tables are always 
 committed
 (See below)
   
 DeclarativeBase.metadata.tables[Account.__tablename__].create(migrate_engine)

Hmm, that looks overly complicated to me. I am using the create method
on the Table as well to do this, but it works fine without a session
just using a plain connection:

engine = create_engine(...)
conn = engine.connect()
with conn.begin():
    Account.create(conn)
    # for testing...
    conn.rollback()

I did not create an ORM session before updating the tables, as the schema
might not match the mapped classes until the update is done.

Greetings, Torsten




[sqlalchemy] Protection against changes outside of (explicit) transactions

2010-05-25 Thread Torsten Landschoff
A while ago I had a look at Clojure (http://www.clojure.org). What I
especially liked is the software transactional memory. I figured that
the same functionality should be doable with SQLAlchemy, although it
will probably be slower.

I am now working on an interactive application which keeps a lot of
objects in memory. Given that python does not offer protection of member
variables, I fear that a reference to a list escapes and is overriden
inadvertently.

In Clojure, changes to transactional memory are disallowed outside of a
transaction:

-
user=> (def data (ref "foo"))
#'user/data
user=> (deref data)
"foo"
user=> (ref-set data "bar")
java.lang.IllegalStateException: No transaction running
(NO_SOURCE_FILE:0)
user=> (dosync (ref-set data "bar"))
"bar"
user=> (deref data)
"bar"
-

I was expecting the same of SQLAlchemy with autocommit=True sessions.
However, the following code shows that this does not work:

-
import sqlalchemy as sa
import sqlalchemy.orm as orm

metadata = sa.MetaData()
class Data(object): pass
data_table = sa.Table("data", metadata,
    sa.Column("name", sa.String, primary_key=True))
orm.mapper(Data, data_table)

engine = sa.create_engine("sqlite:///:memory:", echo=True)
metadata.create_all(engine)
maker = orm.sessionmaker(bind=engine, autocommit=True)
session = maker()

assert not session.is_active
d1 = Data()
d1.name = "an_entry"
session.add(d1)
# <- I would expect an exception here as we are outside of any
# transaction!

with session.begin():
    d2 = Data()
    d2.name = "another_entry"
    session.add(d2)

# This will fail as there are two entries now.
session.query(Data).one()
-


Is there a supported way to enforce explicitly starting a transaction
before adding or updating any persistent objects?


My ultimate goal is:
- have the working set of persistent objects in memory (read only by
default)
- when user requests a change, start a transaction and update the db
- by default, don't expire the in-memory objects (this is a local DB
using SQLite, usually only a single app will have access at a time)
- when the user hits f5 (File-Refresh), expire all persistent objects
and refresh the GUI
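The refresh part of this list needs nothing special; a sketch of what the f5 handler boils down to (Data and its columns are made up, and the raw UPDATE stands in for another process changing the database):

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Data(Base):  # made-up example class
    __tablename__ = "data"
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String)

# sqlite:// shares one in-memory connection per thread, so the raw
# UPDATE below is visible to the session.
engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine, expire_on_commit=False)()

d = Data(name="old")
session.add(d)
session.commit()

# Something changes the row behind the session's back.
with engine.begin() as conn:
    conn.execute(sa.text("UPDATE data SET name = 'new'"))

assert d.name == "old"   # in-memory state is stale, as desired
session.expire_all()     # what File->Refresh would call
assert d.name == "new"   # next attribute access reloads from the DB
```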

Greetings, Torsten





Re: [sqlalchemy] Protection against changes outside of (explicit) transactions

2010-05-25 Thread Torsten Landschoff
Hi Michael,

thanks for your fast reply. How do you keep up with all this email?

On Tue, 2010-05-25 at 10:08 -0400, Michael Bayer wrote:


 I suppose you could try subclassing Session or perhaps seeing if the
 hooks provided with SessionExtension (I'm thinking the attach hook)
 give you enough to go on here.   Or, probably easier, just build
 yourself a wrapper class that looks like Session but proxies requests
 to it, after passing through your checks for current transaction
 active.

The attach hook would suffice to catch the session.add of new objects.
However, this is not my core interest, as it is highly unlikely that
somebody calls this by accident.

My goal is to catch property writes on mapped attributes. So I will
probably need an AttributeExtension, but I don't know how to access the
session state from there.
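For reference, the attribute-write guard I am describing can be sketched with the attribute event API that later superseded AttributeExtension (SQLAlchemy 1.4+ names such as Session.in_transaction(); the Data class is illustrative):

```python
import sqlalchemy as sa
from sqlalchemy import event
from sqlalchemy.orm import declarative_base, sessionmaker, object_session

Base = declarative_base()

class Data(Base):  # made-up example class
    __tablename__ = "data"
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String)

@event.listens_for(Data.name, "set")
def guard(target, value, oldvalue, initiator):
    # Reject writes on session-attached objects while no transaction runs.
    sess = object_session(target)
    if sess is not None and not sess.in_transaction():
        raise RuntimeError("write to Data.name outside of a transaction")
    return value

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

d = Data(name="a")   # detached: no session yet, the guard lets this pass
session.add(d)
session.commit()     # transaction over

try:
    d.name = "b"     # attached, but no transaction -> rejected
    rejected = False
except RuntimeError:
    rejected = True

with session.begin():
    d.name = "b"     # inside an explicit transaction -> allowed

assert rejected and d.name == "b"
```

The listener fires before the value is stored, so raising from it aborts the assignment; flush-internal state changes bypass the event.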

 It should go without saying that Clojure is from an entirely different
 philosophy than that of Python and SQLAlchemy (its author is hostile
 to the entire concept of object oriented design), so imitating
 particular behaviors of Clojure is not a core goal of the project.

:-) I don't know what makes you infer that Rich is hostile to OO design.
Of course I don't ask SQLAlchemy to imitate Clojure. It's just that
I find Clojure's behaviour in this single context more intuitive.
I thought it would be a neat feature to support this kind of protection
in SQLAlchemy. Your mileage may vary.

Greetings, Torsten






Re: [sqlalchemy] Mapping dictionaries with string keys and record values

2010-04-26 Thread Torsten Landschoff
Hi Michael.

On Sun, 2010-04-25 at 22:32 -0400, Michael Bayer wrote:

   You should be able to roll this yourself as a custom dictlike
   collection class . 

That's what I did :-) I wrote mostly to share my code and to suggest
that this should be available out of the box.

  sorry, it seems you've done that already.  I'm surprised the
  existing attribute_mapped_collection doesn't do this already ?

I was surprised as well.

 well, it doesn't.  It should, and maybe should raise an error if the
 two values don't match.  It doesn't make much sense to add another
 collection class to do what the first one should.

Agreed.

 I'm sure you noticed that you can remove the repetition by working the
 other way, i.e.:
 
 item.note.set(Note(key="color", value="blue", desc="This should be
 blue because"))
 item.note.set(Note(key="shape", value="rectangular"))

Indeed, although I find it unintuitive that set actually does an
append/insert/add operation. Apart from that, I try to keep the mapping
code and the data objects (which carry application logic) separate in
case we ever want to use a non-SQL data backend (NoSQL DB, ZODB, ...).
So usually, there is no set method on item.notes and there is no key
property in the Note class (it's added by the mapper as _key).

 also, never do this:

 
 class Note(object):
 def __init__(self, **args): 
 self.__dict__.update(args)

That's not production code. I just added it to get initialization in one
line.

 you bypass all of SQLAlchemy's history tracking logic.  do this
 instead:
 
 
 class Note(object):
 def __init__(self, **args): 
 for k, v in args.items():
 setattr(self, k, v)

I usually name the possible keyword arguments explicitly as the output
of help(Note) is otherwise quite unhelpful.

Thanks for your reply and the remarks! Do you plan to extend
attribute_mapped_collection to update the key like in my example?

Be aware that this approach also has its downside. For example,
assigning

item2 = Item()
item2.notes['color'] = item.notes['color']

will not work contrary to what one might expect (it generated a conflict
on the id column). The solution is to have the target collection copy
the assigned object if it has a key already. But that might be too much
magic to be easy to understand.

So I think, the automatic key update should be an option.

Greetings, Torsten




Re: [sqlalchemy] Mapping dictionaries with string keys and record values

2010-04-26 Thread Torsten Landschoff
On Mon, 2010-04-26 at 11:24 -0400, Michael Bayer wrote:

 2. I wonder if there's a way to make this happen more deeply than within
 setattr().   Like the collection internals would include an event to
 operate upon the target object that includes the other args from the
 collection decorator.

Sorry, I was unable to follow you here. You think there should be an
event generated to be handled by the collection internals? What
arguments of the decorator are you talking about?

 3. corresponding delete() event.  Set the attribute on the replaced object
 to None ?   that seems potentially surprising ?

In my case it did not matter much, since the object is owned by the
mapped collection; after the delete it is garbage collected anyway.
As the client code does not know about the key attribute, it would not
notice the change even if it still held a reference.

For me this is the same as some code still having a reference to a value
that was stored in a dict. If you remove it from the dictionary, you may
still keep a reference but it is not reachable anymore from the dict. To
make this match with the database, I am using cascade=delete-orphan.

  item2 = Item()
  item2.notes['color'] = item.notes['color']
 
  will not work contrary to what one might expect (it generated a conflict
  on the id column). The solution is to have the target collection copy
  the assigned object if it has a key already. But that might be too much
  magic to be easy to understand.
 
 the target can only be in one collection at a time so I would think
 item2's id wins.  its a primary key switch.

Can't it belong to two collections if there is a secondary join?
In plain Python code, both mappings would have a reference to the same
object.

Greetings, Torsten




Re: [sqlalchemy] Mapping dictionaries with string keys and record values

2010-04-26 Thread Torsten Landschoff
On Mon, 2010-04-26 at 11:08 -0700, jason kirtland wrote:

  1. I'm not sure why it wasn't that way already, and I'd want to hear from
  Jason Kirtland, its author, on if we are missing something or otherwise
  whats up.   I have a vague notion that there was a reason for this, or
  maybe not.
 
 It's not something that can be easily solved in the general case with
 the current API. The mapped collections use a 'keyfunc' to figure out
 the dictionary key for loaded instances, for example
 'operator.attrgetter("name")' for attribute_mapped_collection("name").
  Mechanically reversing that logic in a setting operation sounds
 pretty hard to me, but perhaps if we allowed an 'assignfunc' function
 to be supplied that would do the trick.  Internally, the collection

I agree. The idea of an assignfunc crossed my mind as well.

Greetings, Torsten




[sqlalchemy] Mapping dictionaries with string keys and record values

2010-04-25 Thread Torsten Landschoff
Hi everybody.

After reading the documentation on dictionary based collections at
http://www.sqlalchemy.org/docs/mappers.html#dictionary-based-collections, I 
am wondering if I am the only one who thinks that this code is unintuitive:

item = Item()
item.notes['color'] = Note('color', 'blue')
print item.notes['color']

I'd rather write

item.notes['color'] = Note('blue')

That the key is stored with the value should be an implementation
detail, I think.
I extended sqlalchemy.orm.collections.MappedCollection with a few lines
to implement this (attached).

Shouldn't something like this be included with SQLAlchemy? Or is this a
bad idea?

Greetings, Torsten



import operator
import sqlalchemy as sa
import sqlalchemy.orm as orm
from sqlalchemy.orm.collections import MappedCollection

class MyMappedCollection(MappedCollection):
    def __init__(self, key_attr):
        super(MyMappedCollection, self).__init__(operator.attrgetter(key_attr))
        self._key_attr = key_attr

    def __setitem__(self, key, value):
        setattr(value, self._key_attr, key)
        if key in self:
            del self[key]
        super(MappedCollection, self).__setitem__(key, value)

def map_on(attr):
    return lambda: MyMappedCollection(attr)

metadata = sa.MetaData()
item_table = sa.Table("item", metadata,
    sa.Column("id", sa.Integer, primary_key=True))

note_table = sa.Table("note", metadata,
    sa.Column("id", sa.Integer, sa.ForeignKey(item_table.c.id), primary_key=True, index=True),
    sa.Column("key", sa.String, primary_key=True),
    sa.Column("value", sa.String),
    sa.Column("desc", sa.String))

class Item(object): pass
class Note(object):
    def __init__(self, **args): self.__dict__.update(args)

orm.mapper(Note, note_table)
orm.mapper(Item, item_table, properties=dict(
    note=orm.relation(Note, cascade="all,delete-orphan", collection_class=map_on("key"))))

engine = sa.create_engine("sqlite:///")
metadata.create_all(engine)
session = orm.sessionmaker(bind=engine)()

item = Item()
item.note["color"] = Note(value="blue", desc="This should be blue because")
item.note["shape"] = Note(value="rectangular")
session.add(item)
session.commit()




[sqlalchemy] Unifying large objects on commit/flush

2010-03-26 Thread Torsten Landschoff
Hi there!

We are considering to use SQLAlchemy for a new project. Our first tests
look promising and it is a fun to use SA. But I still have a problem to
implement some special features we would like to have.

For example, I want to store large (tens to hundreds of MB) objects into
the database but keep the option open to store them into the filesystem
later. I would prefer storing them into the DB to have them under
transaction protection but OTOH I know that this can become a
performance problem. So I want to build an interface to allow external
(file or extra DB) storage later.

So instead of the blob itself I want to store a cryptographic hash (like
git, Mercurial, Fossil SCM etc. do) and index the real data from that.
If somebody tries to import the same file twice, it should just reuse
the existing blob (and possibly sanity check if the content matches).

The following example is a greatly simplified example of that approach.
It works like this, but I would like to do without the exception handler
at the end ;-)

--
import sqlalchemy as sa
import sqlalchemy.orm as orm
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.associationproxy import association_proxy
import hashlib

Base = declarative_base()

class CommonStorage(Base):
    __tablename__ = "common"
    hashval = sa.Column(sa.String, primary_key=True)
    value   = sa.Column(sa.LargeBinary)
    def __init__(self, v):
        self.value = v
        self.hashval = hashlib.md5(v).hexdigest()

class StorageUser(Base):
    __tablename__ = "user"
    id    = sa.Column(sa.Integer, primary_key=True)
    ref   = sa.Column(None, sa.ForeignKey(CommonStorage.hashval))
    rel   = orm.relation(CommonStorage)
    value = association_proxy("rel", "value")

engine = sa.create_engine("sqlite:///", echo=True)
Base.metadata.create_all(engine)
session = orm.sessionmaker(bind=engine, autoflush=False)()

ua, ub = StorageUser(), StorageUser()
ua.value = ub.value = "Something"
session.add(ua)
session.commit()
session.add(ub)
try:
    session.commit()
except sa.exc.FlushError:
    # I really like this error handling - we have to rollback first to
    # acknowledge the error. Cool! Never ignore errors and carry on again!
    session.rollback()
    ub.rel = session.query(CommonStorage).filter(ub.rel.hashval ==
        CommonStorage.hashval).first()
    session.add(ub)
    session.commit()
---

I tried using a MapperExtension on the StorageUser and to replace the
ref inside before_insert and before_update by searching for a matching
hash:

-
class StorageExtension(interfaces.MapperExtension):
    def before_insert(self, mapper, connection, instance):
        if instance.rel != None:
            sess = orm.object_session(instance)
            existing = sess.query(CommonStorage).filter(instance.rel.hashval ==
                CommonStorage.hashval).first()
            if existing != None:
                instance.rel = existing
    before_update = before_insert


 class StorageUser(Base):
     __tablename__ = "user"
+    __mapper_args__ = dict(extension=StorageExtension())
-

While the extension gets called and tries to replace the relation, SA
still tries to insert the new entry.


Any way to get this implemented?

Greetings and thanks for any hint, Torsten




Re: [sqlalchemy] Unifying large objects on commit/flush

2010-03-26 Thread Torsten Landschoff
Hi Michael,

On Fri, 2010-03-26 at 14:30 -0400, Michael Bayer wrote:

 here's the relevant bit of documentation:
 
 http://www.sqlalchemy.org/docs/reference/orm/interfaces.html?highlight=mapperextension#sqlalchemy.orm.interfaces.MapperExtension.before_insert
 
 Column-based attributes can be modified within this method which will
 result in the new value being inserted. However **no** changes to the
 overall flush plan can be made, and manipulation of the Session  will not
 have the desired effect. To manipulate the Session  within an extension,
 use SessionExtension.

Thanks for the pointer. I read that part before but was not sure if I
have to modify the flush plan.

I attached the modified source code that actually works.

I dislike this solution for the following reasons:

* The extension scans through all new instances which could be quite a
number.
* The session must be modified (okay, no real problem).
* In case multiple classes use the CommonStorage class, the
StorageExtension must be adjusted. It would be better to operate on
CommonStorage instances but I don't know how to find the related classes
before the whole thing goes to the database.
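A further alternative, since the hash is CommonStorage's primary key: funnel every new blob through Session.merge(), which returns the already-persistent row instead of inserting a duplicate. This is a sketch of a different approach than the extension, not a drop-in replacement (declarative_base is imported from sqlalchemy.orm as in later releases):

```python
import hashlib
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class CommonStorage(Base):
    __tablename__ = "common"
    hashval = sa.Column(sa.String, primary_key=True)
    value = sa.Column(sa.LargeBinary)
    def __init__(self, v):
        self.value = v
        self.hashval = hashlib.md5(v).hexdigest()

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# merge() looks the pending blob up by primary key (the hash) and
# returns the persistent copy if one exists, instead of inserting
# a duplicate row.
a = session.merge(CommonStorage(b"Something"))
session.commit()
b = session.merge(CommonStorage(b"Something"))
session.commit()

rows = session.query(CommonStorage).count()
assert a is b and rows == 1
```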


Another question: Any idea when the second SA book will be published? I
bought the Essential SA book but it is a bit outdated covering 0.4.x.

Thanks, Torsten


import sqlalchemy as sa
import sqlalchemy.orm as orm
import sqlalchemy.orm.interfaces as interfaces
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.associationproxy import association_proxy
import hashlib

Base = declarative_base()

class CommonStorage(Base):
    __tablename__ = "common"
    hashval = sa.Column(sa.String, primary_key=True)
    value = sa.Column(sa.LargeBinary)
    def __init__(self, v):
        self.value = v
        self.hashval = hashlib.md5(v).hexdigest()
    def joinFrom(self, session):
        copy = session.query(CommonStorage).filter(self.hashval == CommonStorage.hashval).first()
        return copy or self

class StorageUser(Base):
    __tablename__ = "user"
    id = sa.Column(sa.Integer, primary_key=True)
    ref = sa.Column(None, sa.ForeignKey(CommonStorage.hashval))
    rel = orm.relation(CommonStorage)
    value = association_proxy("rel", "value")

class StorageExtension(interfaces.SessionExtension):
    def before_flush(self, session, flush_context, instances=None):
        for d in session.new:
            if isinstance(d, StorageUser) and d.rel is not None:
                original = d.rel
                d.rel = d.rel.joinFrom(session)
                if original is not d.rel:
                    session.expunge(original)

engine = sa.create_engine("sqlite:///", echo=True)
Base.metadata.create_all(engine)
session = orm.sessionmaker(bind=engine, extension=StorageExtension(), autoflush=False)()

ua, ub = StorageUser(), StorageUser()
ua.value = ub.value = "Something"
session.add(ua)
session.commit()
session.add(ub)
session.commit()