[sqlalchemy] Re: BakedQuery with function parameters

2016-10-24 Thread Anton Baranenko
Thank you Mike!

This is indeed quite complex; I will try it and will write back with the 
results. Worst case, I have a plan B, which is to save the raw SQL, execute 
it without the ORM, and then hard-code the mapping manually - luckily, while 
the query structure is quite complex, the resulting entity is relatively simple.
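
For the record, plan B would look roughly like this (a sketch; RAW_ITEM_SQL 
and WorkItem are placeholders for the real saved SQL and entity):

from sqlalchemy import text

def load_work_items(session, user_id):
    # execute the saved raw SQL on the Core level, bypassing the ORM
    rows = session.execute(text(RAW_ITEM_SQL), {'user_id': user_id})
    # hard-code the mapping: the entity is simple, so this stays manageable
    return [WorkItem(id=row['id'], title=row['title']) for row in rows]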

Thanks again, 
Anton



[sqlalchemy] BakedQuery with function parameters

2016-10-23 Thread Anton Baranenko
Hi all,

Based on the successful progress with my previous topic "Using BakedQuery 
with joined subqueries" (thanks again, Mike!), I have the next question.

For some of my queries the filter conditions are generated based on certain 
parameters. For example, I have a JSONB field that contains custom fields 
from the user, and "saved searches" defined by users to search on those 
custom fields. Based on the "saved search ID", my function then generates a 
query.

The logic is roughly:

def generate_saved_search_query(self, saved_search_id):
    conditions = self.get_saved_search_conditions(saved_search_id)

    query = ...  # the base query
    for condition in conditions:
        query = query.filter(condition)
    return query

The generate_saved_search_query function is idempotent for any parameter 
value, and searching for any particular saved_search_id occurs frequently, 
so it makes sense to cache the query.

The question is: how can I use BakedQuery with this setup? From the source 
I can guess I should use add_criteria / with_criteria, but in the unit 
tests / documentation I could not find an example of doing so with function 
parameter values.
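
For the archive, here is a minimal sketch of my add_criteria guess (not a 
confirmed answer; WorkItemData and get_saved_search_conditions are stand-ins 
for the real model and helper):

from sqlalchemy.ext import baked

bakery = baked.bakery()

def generate_saved_search_query(session, saved_search_id):
    baked_query = bakery(lambda s: s.query(WorkItemData))
    for condition in get_saved_search_conditions(saved_search_id):
        # the extra args passed to add_criteria become part of the cache
        # key, so every saved_search_id bakes its own statement; binding
        # the condition as a default argument gives each lambda its own
        # condition
        baked_query.add_criteria(
            lambda q, cond=condition: q.filter(cond),
            saved_search_id)
    return baked_query(session)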

Thank you for any help,
Anton



[sqlalchemy] Re: Using BakedQuery with joined subqueries

2016-10-23 Thread Anton Baranenko

Thank you Mike, worked as expected!


[sqlalchemy] Using BakedQuery with joined subqueries

2016-10-22 Thread Anton Baranenko
Hi all,

I'd like to ask for your help with implementing the following.

I have quite a complex query with subqueries generated in the application. 
In general the structure would look like this:

draft_data = aliased(
    Draft.query.filter(
        # drafts_from_user_id is a bind parameter
        Draft.author_user_id == drafts_from_user_id
    ).subquery(),
    name='draft_data'
)


Then a bunch of similar subqueries get integrated into the main query 
like this:

query = query.outerjoin(draft_data, WorkItemData.draft_id == draft_data.c.id)


The actual query has around six parametrized subqueries like this.

The resulting query is pretty fast, but its compilation takes about 50% to 
100% of its execution time, so I decided to go with BakedQuery.

However, from the BakedQuery documentation I could not figure out how to 
use it with joins / subqueries. Should I bake the subqueries separately and 
then somehow join them to the main BakedQuery (and if so, how)? Or should I 
bake only the main query (and then what do I do with the subqueries)?
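
One shape that looks workable (a sketch, not a confirmed answer): build the 
subqueries inside the single baked callable and defer the parameter with 
bindparam(), so the whole statement - subqueries included - is compiled once 
and cached. Draft and WorkItemData follow the snippets above:

from sqlalchemy import bindparam
from sqlalchemy.ext import baked
from sqlalchemy.orm import aliased

bakery = baked.bakery()

def work_items_with_drafts(session, user_id):
    def base(s):
        draft_data = aliased(
            s.query(Draft)
            .filter(Draft.author_user_id == bindparam('drafts_from_user_id'))
            .subquery(),
            name='draft_data')
        return s.query(WorkItemData).outerjoin(
            draft_data, WorkItemData.draft_id == draft_data.c.id)

    baked_query = bakery(base)
    # the bind parameter is supplied at execution time, after the cached
    # statement has been compiled once
    return baked_query(session).params(drafts_from_user_id=user_id)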

Thank you for any help,
Anton



Re: [sqlalchemy] Default column value before an object is commited

2015-05-07 Thread Anton


On Wednesday, May 6, 2015 at 8:03:54 PM UTC-7, Michael Bayer wrote:

  Well it's already weird that you're defining both default and 
 server_default at the same time on the Column.  I'm not actually sure 
 what effect that would have and am surprised it doesn't fail in some 
 way...or at least renders the server_default more or less unused.


I have server_default for Alembic. Alembic does not pick up default, but it 
does pick up server_default when it generates migration files for the DB 
structure. This is useful if rows are inserted outside of the ORM.


 If you're just defining defaults as constants I'd think you only need 
 default.arg and that's it.   

I may have either callables or scalars.

 if it might be callable, the Python builtin callable() will do that for 
 you, is_callable is just a shortcut to having to call that internally.
 Those are all non-underscored attributes so they are public and safe to use.

Got it.

Thanks, 
Anton.



[sqlalchemy] Default column value before an object is commited

2015-05-06 Thread Anton
*The goal* 
I have an ORM model. 

class MyModel(Model):
    __tablename__ = 'my_model'
    col1 = db.Column(db.Boolean, default=False, server_default=false())
    col2 = db.Column(db.Integer, default=100, server_default=null())



I want to be able to create an object and have col1 and col2 be populated 
with default values even before the object is committed or even added to a 
session.

For example:

>>> print(MyModel().col1)
False
>>> print(MyModel().col2)
100

I came across this answer by Mike Bayer: 
http://stackoverflow.com/a/14013090/925675 where he recommends using either 
__init__() or events.
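
For context, the __init__() variant from that answer would look roughly 
like this (a sketch; it duplicates the default values, which is exactly 
what I want to avoid):

class MyModel(Model):
    __tablename__ = 'my_model'
    col1 = db.Column(db.Boolean, default=False, server_default=false())
    col2 = db.Column(db.Integer, default=100, server_default=null())

    def __init__(self, **kwargs):
        # the defaults are repeated here by hand
        kwargs.setdefault('col1', False)
        kwargs.setdefault('col2', 100)
        super(MyModel, self).__init__(**kwargs)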

I decided to use an event-based solution, but I did not want to duplicate 
the default values in the listener function, so I am trying to do it the 
following way:

@event.listens_for(MyModel, 'init')
def init(target, args, kwargs):
    target_type = type(target)
    for attr_name in ('col1', 'col2'):
        attr_default = getattr(target_type, attr_name).default
        if attr_default is None:
            continue
        elif attr_default.is_callable:
            kwargs.setdefault(attr_name, attr_default.arg())
        elif attr_default.is_scalar:
            kwargs.setdefault(attr_name, attr_default.arg)
        else:
            raise AttributeError(
                "Unsupported default type of column {}".format(attr_name))
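
With the listener registered, the defaults appear at construction time, 
matching the goal above:

m = MyModel()
assert m.col1 is False
assert m.col2 == 100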


So far it works for my needs, except for two issues:
1) It does not work with Sequence or Clause defaults, which I don't need 
anyway.
2) It relies completely on ColumnDefault internals and partially duplicates 
logic from DefaultExecutionContext._exec_default().

I am wondering whether there is a better solution to this?

Thanks,
Anton.


[sqlalchemy] Re: Choosing pattern for vertical table like mapping.

2014-09-10 Thread Anton
So I am trying to do the following:

class SimpleModel(object):
    __table_args__ = {'extend_existing': True}

    id = db.Column(db.Integer, primary_key=True)

class GroupEntityAttributes(SimpleModel, db.Model):
    __table_args__ = {'extend_existing': True}
    __tablename__ = 'group_entity'

    # Table columns
    id = db.Column(db.Integer, primary_key=True)
    group_id = db.Column(db.Integer, db.ForeignKey('groups.id'), nullable=False)
    entity_id = db.Column(db.Integer, db.ForeignKey('entities.id'), nullable=False)
    key = db.Column(db.String)
    value = db.Column(db.Text)

    # Relationships
    entity = db.relationship('Entity', lazy='joined')
    group = db.relationship('Group')

class GroupEntityAttributesPoly(GroupEntityAttributes):
    __mapper_args__ = {'polymorphic_on': GroupEntityAttributes.key}

class GroupEntity(GroupEntityAttributesPoly):
    __mapper_args__ = {'polymorphic_identity': '_has_connection'}

    others = db.relationship(
        'GroupEntityAttributes',
        primaryjoin=and_(
            remote(foreign(GroupEntityAttributes.group_id)) ==
                GroupEntityAttributes.group_id,
            remote(foreign(GroupEntityAttributes.entity_id)) ==
                GroupEntityAttributes.entity_id,
        )
    )


When I try to load the model GroupEntity I get an error:

ArgumentError: Can't determine relationship direction for relationship 
'GroupEntity.others' - foreign key columns within the join condition are 
present in both the parent and the child's mapped tables. Ensure that only 
those columns referring to a parent column are marked as foreign, either 
via the foreign() annotation or via the foreign_keys argument.

I am not sure how to overcome this error. Would appreciate any help.

Best,
Anton.

On Tuesday, September 9, 2014 1:10:40 PM UTC-7, Anton wrote:

 Trying to use with multiple table mapping.

 On Tuesday, September 9, 2014 1:03:25 PM UTC-7, Anton wrote:

 Hi, 

 I need some help choosing the right pattern for the models I have. I have 
 the following tables:
 Entity (INT id, VARCHAR name)
 Group (INT id, VARCHAR name)
 GroupEntityAttributes (INT entity_id, INT group_id, VARCHAR key, VARCHAR
 name)

 Entity and Group models are pretty straightforward. But
 GroupEntityAttributes represents an Entity within some group with custom
 attributes added, so it resembles this pattern (
 https://bitbucket.org/zzzeek/sqlalchemy/src/374173e89d4e21a75bfabd8a655d17c247b6f1fc/examples/vertical/dictlike.py?at=master
 ), the only difference being that there is an additional foreign key.
 I wonder if there is any approach I can use to adapt the vertical pattern
 to my needs without modifying the tables.

 Best,
 Anton.






Re: [sqlalchemy] Choosing pattern for vertical table like mapping.

2014-09-10 Thread Anton
Hi Mike,

I have two models that I did not provide in the example: Entity and Group.
Every group can assign some custom attributes to an entity; this is 
reflected in GroupEntityAttributes.
Let's say GroupEntityAttributes(group_id=1, entity_id=1, key='color', 
value='red') would mean that Group 1 assigned a custom attribute 'color' 
with a value 'red' to the entity with id 1. So when I want to load an 
entity for a group, I want to load the entity object plus all the custom 
attributes assigned to the entity by the group.
I also need some mechanism to mark an entity as loadable for the group; 
for this I am using a record like 
GroupEntityAttributes(group_id=1, entity_id=1, key='_has_connection', 
value=NULL), so the key '_has_connection' has a special meaning. If it is 
present, then the entity should be loaded for the group with all other 
custom attributes; otherwise not. For this purpose I created the model 
GroupEntity.

This way GroupEntity.query.filter_by(group_id=1).all() will give a list 
of all entities loadable for Group 1. Every item of the list will have an 
association to a Group object and an Entity object, and I want the 
*others* property to represent the list of custom attributes, which I 
would like to implement as described here 
(https://bitbucket.org/zzzeek/sqlalchemy/src/374173e89d4e21a75bfabd8a655d17c247b6f1fc/examples/vertical/dictlike.py?at=master).
For this I need a relationship to GroupEntityAttributes, which turns out 
to be self-referencing because the two models use the same table.

SimpleModel is just a mixin that I use in every model I create to make sure 
that they all have a unique id.

Thanks,
Anton.


On Wednesday, September 10, 2014 3:00:39 PM UTC-7, Michael Bayer wrote:


 On Sep 10, 2014, at 5:11 PM, Anton ansch...@gmail.com wrote:

 So I am trying to do the following:

 class SimpleModel(object):
     __table_args__ = {'extend_existing': True}

     id = db.Column(db.Integer, primary_key=True)

 class GroupEntityAttributes(SimpleModel, db.Model):
     __table_args__ = {'extend_existing': True}
     __tablename__ = 'group_entity'

     # Table columns
     id = db.Column(db.Integer, primary_key=True)
     group_id = db.Column(db.Integer, db.ForeignKey('groups.id'), nullable=False)
     entity_id = db.Column(db.Integer, db.ForeignKey('entities.id'), nullable=False)
     key = db.Column(db.String)
     value = db.Column(db.Text)

     # Relationships
     entity = db.relationship('Entity', lazy='joined')
     group = db.relationship('Group')

 class GroupEntityAttributesPoly(GroupEntityAttributes):
     __mapper_args__ = {'polymorphic_on': GroupEntityAttributes.key}

 class GroupEntity(GroupEntityAttributesPoly):
     __mapper_args__ = {'polymorphic_identity': '_has_connection'}

     others = db.relationship(
         'GroupEntityAttributes',
         primaryjoin=and_(
             remote(foreign(GroupEntityAttributes.group_id)) ==
                 GroupEntityAttributes.group_id,
             remote(foreign(GroupEntityAttributes.entity_id)) ==
                 GroupEntityAttributes.entity_id,
         )
     )



 What’s illustrated here doesn’t really make sense.  If 
 GroupEntityAttributes.group_id referred to “groups.id”, then a 
 relationship extending from GroupEntity would need to refer to whatever 
 class is mapped to the “entities” table, which I don’t see here.  It looks 
 like you’re taking foreign key columns on the “group_entity” table and 
 somehow pointing it to itself in that primaryjoin which doesn’t make sense. 
   GroupEntity would need to be mapped to either “entities” or “groups” in 
 order to refer to GroupEntityAttributes using that pattern and this sort of 
 looks like the association object pattern, see 
 http://docs.sqlalchemy.org/en/rel_0_9/orm/relationships.html#association-object
  
 for an example of what that looks like.

 Other things that you probably don’t want:

 1. polymorphic_on on a subclass, that should always go on the mapped class 
 at the bottom of the hierarchy, see 
 http://docs.sqlalchemy.org/en/rel_0_9/orm/inheritance.html#joined-table-inheritance
  
 for basic examples

 2. extend_existing, this is usually only relevant when you’re using 
 reflection, otherwise it probably means you’re setting things up 
 incorrectly 










 When I try to load the model GroupEntity I get an error:
 ArgumentError: Can't determine relationship direction for relationship '
 GroupEntity.others' - foreign key columns within the join condition are 
 present in both the parent and the child's mapped tables.  Ensure that only 
 those columns referring to a parent column are marked as foreign, either 
 via the foreign() annotation or via the foreign_keys argument.

 I am not sure how to overcome this error. Would appreciate any help.

 Best,
 Anton.

 On Tuesday, September 9, 2014 1:10:40 PM UTC-7, Anton wrote:

 Trying to use with multiple table mapping.

 On Tuesday, September 9, 2014 1:03:25 PM UTC-7, Anton wrote:

 Hi, 

 I need some help

Re: [sqlalchemy] Choosing pattern for vertical table like mapping.

2014-09-10 Thread Anton
Mike, the tables are already there; I am trying to create models mapped 
onto the tables so they are easier to work with.
The table design looks like this:

CREATE TABLE group (
    id integer NOT NULL,
    name character varying
);

CREATE TABLE entity (
    id integer NOT NULL,
    name character varying
);

CREATE TABLE group_entity (
    id integer NOT NULL,
    entity_id integer,
    key character varying,
    value text,
    group_id integer NOT NULL
);
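
(For later readers of this thread: the association-object pattern Mike 
pointed to, mapped over these three tables, might look roughly like the 
sketch below. The ForeignKey declarations are assumptions - the DDL above 
declares no FK constraints - and the relationship names are made up.)

from sqlalchemy import Column, ForeignKey, Integer, String, Text
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

class Group(Base):
    __tablename__ = 'group'
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Entity(Base):
    __tablename__ = 'entity'
    id = Column(Integer, primary_key=True)
    name = Column(String)

class GroupEntityAttribute(Base):
    # association object: one (group, entity, key) row per custom attribute
    __tablename__ = 'group_entity'
    id = Column(Integer, primary_key=True)
    group_id = Column(Integer, ForeignKey('group.id'), nullable=False)
    entity_id = Column(Integer, ForeignKey('entity.id'))
    key = Column(String)
    value = Column(Text)

    group = relationship(Group, backref='entity_attributes')
    entity = relationship(Entity, backref='group_attributes')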

Best,
Anton.





On Wednesday, September 10, 2014 4:51:54 PM UTC-7, Michael Bayer wrote:

 hi Anton -

 if you can show me correct table designs, e.g. as simple CREATE TABLE 
 statements, that definitely do the thing you need, I can show you how to 
 map to it.  So far it’s not clear if you’re looking for schema design via 
 SQLAlchemy’s API.

 - mike


 On Sep 10, 2014, at 6:36 PM, Anton ansch...@gmail.com wrote:

 Hi Mike,

 I have two models that I did not provide in the example: Entity and Group.
 Every group can assign some custom attributes to an entity; this is
 reflected in GroupEntityAttributes.
 Let's say GroupEntityAttributes(group_id=1, entity_id=1, key='color',
 value='red') would mean that Group 1 assigned a custom attribute 'color'
 with a value 'red' to the entity with id 1. So when I want to load an
 entity for a group, I want to load the entity object plus all the custom
 attributes assigned to the entity by the group.
 I also need some mechanism to mark an entity as loadable for the group;
 for this I am using a record like
 GroupEntityAttributes(group_id=1, entity_id=1, key='_has_connection',
 value=NULL), so the key '_has_connection' has a special meaning. If it is
 present, then the entity should be loaded for the group with all other
 custom attributes; otherwise not. For this purpose I created the model
 GroupEntity.

 This way GroupEntity.query.filter_by(group_id=1).all() will give a list
 of all entities loadable for Group 1. Every item of the list will have an
 association to a Group object and an Entity object, and I want the
 *others* property to represent the list of custom attributes, which I
 would like to implement as described here
 (https://bitbucket.org/zzzeek/sqlalchemy/src/374173e89d4e21a75bfabd8a655d17c247b6f1fc/examples/vertical/dictlike.py?at=master).
 For this I need a relationship to GroupEntityAttributes, which turns out
 to be self-referencing because the two models use the same table.

 SimpleModel is just a mixin that I use in every model I create to make 
 sure that they all have a unique id.

 Thanks,
 Anton.


 On Wednesday, September 10, 2014 3:00:39 PM UTC-7, Michael Bayer wrote:


 On Sep 10, 2014, at 5:11 PM, Anton ansch...@gmail.com wrote:

 So I am trying to do the following:

 class SimpleModel(object):
     __table_args__ = {'extend_existing': True}

     id = db.Column(db.Integer, primary_key=True)

 class GroupEntityAttributes(SimpleModel, db.Model):
     __table_args__ = {'extend_existing': True}
     __tablename__ = 'group_entity'

     # Table columns
     id = db.Column(db.Integer, primary_key=True)
     group_id = db.Column(db.Integer, db.ForeignKey('groups.id'), nullable=False)
     entity_id = db.Column(db.Integer, db.ForeignKey('entities.id'), nullable=False)
     key = db.Column(db.String)
     value = db.Column(db.Text)

     # Relationships
     entity = db.relationship('Entity', lazy='joined')
     group = db.relationship('Group')

 class GroupEntityAttributesPoly(GroupEntityAttributes):
     __mapper_args__ = {'polymorphic_on': GroupEntityAttributes.key}

 class GroupEntity(GroupEntityAttributesPoly):
     __mapper_args__ = {'polymorphic_identity': '_has_connection'}

     others = db.relationship(
         'GroupEntityAttributes',
         primaryjoin=and_(
             remote(foreign(GroupEntityAttributes.group_id)) ==
                 GroupEntityAttributes.group_id,
             remote(foreign(GroupEntityAttributes.entity_id)) ==
                 GroupEntityAttributes.entity_id,
         )
     )



 What’s illustrated here doesn’t really make sense.  If 
 GroupEntityAttributes.group_id referred to “groups.id”, then a 
 relationship extending from GroupEntity would need to refer to whatever 
 class is mapped to the “entities” table, which I don’t see here.  It looks 
 like you’re taking foreign key columns on the “group_entity” table and 
 somehow pointing it to itself in that primaryjoin which doesn’t make sense. 
   GroupEntity would need to be mapped to either “entities” or “groups” in 
 order to refer to GroupEntityAttributes using that pattern and this sort of 
 looks like the association object pattern, see 
 http://docs.sqlalchemy.org/en/rel_0_9/orm/relationships.html#association-object
  
 for an example of what that looks like.

 Other things that you probably don’t want:

 1. polymorphic_on on a subclass, that should always go on the mapped 
 class at the bottom of the hierarchy, see 
 http://docs.sqlalchemy.org/en/rel_0_9/orm/inheritance.html#joined-table

Re: [sqlalchemy] Choosing pattern for vertical table like mapping.

2014-09-10 Thread Anton
I already have models for group, entity and group_entity. Now I am trying 
to build a model that would represent an entity for a group with all the 
custom attributes.

On Wednesday, September 10, 2014 5:06:36 PM UTC-7, Anton wrote:

 Mike, the tables are already there; I am trying to create models mapped
 onto the tables so they are easier to work with.
 The table design looks like this:

 CREATE TABLE group (
     id integer NOT NULL,
     name character varying
 );

 CREATE TABLE entity (
     id integer NOT NULL,
     name character varying
 );

 CREATE TABLE group_entity (
     id integer NOT NULL,
     entity_id integer,
     key character varying,
     value text,
     group_id integer NOT NULL
 );

 Best,
 Anton.





 On Wednesday, September 10, 2014 4:51:54 PM UTC-7, Michael Bayer wrote:

 hi Anton -

 if you can show me correct table designs, e.g. as simple CREATE TABLE 
 statements, that definitely do the thing you need, I can show you how to 
 map to it.  So far it’s not clear if you’re looking for schema design via 
 SQLAlchemy’s API.

 - mike


 On Sep 10, 2014, at 6:36 PM, Anton ansch...@gmail.com wrote:

 Hi Mike,

 I have two models that I did not provide in the example: Entity and Group.
 Every group can assign some custom attributes to an entity; this is
 reflected in GroupEntityAttributes.
 Let's say GroupEntityAttributes(group_id=1, entity_id=1, key='color',
 value='red') would mean that Group 1 assigned a custom attribute 'color'
 with a value 'red' to the entity with id 1. So when I want to load an
 entity for a group, I want to load the entity object plus all the custom
 attributes assigned to the entity by the group.
 I also need some mechanism to mark an entity as loadable for the group;
 for this I am using a record like
 GroupEntityAttributes(group_id=1, entity_id=1, key='_has_connection',
 value=NULL), so the key '_has_connection' has a special meaning. If it is
 present, then the entity should be loaded for the group with all other
 custom attributes; otherwise not. For this purpose I created the model
 GroupEntity.

 This way GroupEntity.query.filter_by(group_id=1).all() will give a
 list of all entities loadable for Group 1. Every item of the list
 will have an association to a Group object and an Entity object, and I
 want the *others* property to represent the list of custom attributes,
 which I would like to implement as described here (
 https://bitbucket.org/zzzeek/sqlalchemy/src/374173e89d4e21a75bfabd8a655d17c247b6f1fc/examples/vertical/dictlike.py?at=master).
 For this I need a relationship to
 GroupEntityAttributes, which turns out to be self-referencing because the
 two models use the same table.

 SimpleModel is just a mixin that I use in every model I create to make 
 sure that they all have a unique id.

 Thanks,
 Anton.


 On Wednesday, September 10, 2014 3:00:39 PM UTC-7, Michael Bayer wrote:


 On Sep 10, 2014, at 5:11 PM, Anton ansch...@gmail.com wrote:

 So I am trying to do the following:

 class SimpleModel(object):
     __table_args__ = {'extend_existing': True}

     id = db.Column(db.Integer, primary_key=True)

 class GroupEntityAttributes(SimpleModel, db.Model):
     __table_args__ = {'extend_existing': True}
     __tablename__ = 'group_entity'

     # Table columns
     id = db.Column(db.Integer, primary_key=True)
     group_id = db.Column(db.Integer, db.ForeignKey('groups.id'), nullable=False)
     entity_id = db.Column(db.Integer, db.ForeignKey('entities.id'), nullable=False)
     key = db.Column(db.String)
     value = db.Column(db.Text)

     # Relationships
     entity = db.relationship('Entity', lazy='joined')
     group = db.relationship('Group')

 class GroupEntityAttributesPoly(GroupEntityAttributes):
     __mapper_args__ = {'polymorphic_on': GroupEntityAttributes.key}

 class GroupEntity(GroupEntityAttributesPoly):
     __mapper_args__ = {'polymorphic_identity': '_has_connection'}

     others = db.relationship(
         'GroupEntityAttributes',
         primaryjoin=and_(
             remote(foreign(GroupEntityAttributes.group_id)) ==
                 GroupEntityAttributes.group_id,
             remote(foreign(GroupEntityAttributes.entity_id)) ==
                 GroupEntityAttributes.entity_id,
         )
     )



 What’s illustrated here doesn’t really make sense.  If 
 GroupEntityAttributes.group_id referred to “groups.id”, then a 
 relationship extending from GroupEntity would need to refer to whatever 
 class is mapped to the “entities” table, which I don’t see here.  It looks 
 like you’re taking foreign key columns on the “group_entity” table and 
 somehow pointing it to itself in that primaryjoin which doesn’t make sense. 
   GroupEntity would need to be mapped to either “entities” or “groups” in 
 order to refer to GroupEntityAttributes using that pattern and this sort of 
 looks like the association object pattern, see 
 http://docs.sqlalchemy.org/en/rel_0_9/orm/relationships.html#association-object
  
 for an example of what that looks like

[sqlalchemy] Choosing pattern for vertical table like mapping.

2014-09-09 Thread Anton
Hi, 

I need some help choosing the right pattern for the models I have. I have 
the following tables:

Entity (INT id, VARCHAR name)
Group (INT id, VARCHAR name)
GroupEntityAttributes (INT entity_id, INT group_id, VARCHAR key, VARCHAR name)

Entity and Group models are pretty straightforward. But 
GroupEntityAttributes represents an Entity within some group with custom 
attributes added, so it resembles this pattern ( 
https://bitbucket.org/zzzeek/sqlalchemy/src/374173e89d4e21a75bfabd8a655d17c247b6f1fc/examples/vertical/dictlike.py?at=master
), the only difference being that there is an additional foreign key.
I wonder if there is any approach I can use to adapt the vertical pattern to 
my needs without modifying the tables.

Best,
Anton.




[sqlalchemy] Re: Choosing pattern for vertical table like mapping.

2014-09-09 Thread Anton
I am trying to use it with a multiple table mapping.

On Tuesday, September 9, 2014 1:03:25 PM UTC-7, Anton wrote:

 Hi, 

 I need some help choosing the right pattern for the models I have. I have 
 the following tables:
 Entity (INT id, VARCHAR name)
 Group (INT id, VARCHAR name)
 GroupEntityAttributes (INT entity_id, INT group_id, VARCHAR key, VARCHAR
 name)

 Entity and Group models are pretty straightforward. But
 GroupEntityAttributes represents an Entity within some group with custom
 attributes added, so it resembles this pattern (
 https://bitbucket.org/zzzeek/sqlalchemy/src/374173e89d4e21a75bfabd8a655d17c247b6f1fc/examples/vertical/dictlike.py?at=master
 ), the only difference being that there is an additional foreign key.
 I wonder if there is any approach I can use to adapt the vertical pattern
 to my needs without modifying the tables.

 Best,
 Anton.






Re: [sqlalchemy] Speed up bulk inserts

2014-08-18 Thread Anton
Hi Michael,

I checked your post on Stack Overflow; could you help me figure something 
out regarding the usage of Core?
In the example you provided there is a test_sqlite3 version which uses a 
custom, user-supplied query, so it issues only one insert statement with 
all rows in the values clause. I wonder if this can be achieved using Core. 
I tried this approach:
session.connection().execute(
    mytable.insert(),
    raw_list
)
where raw_list is a list of dictionaries to insert, and this issues a 
separate statement for every row, which is probably expected. But then I 
came across this docs page 
http://docs.sqlalchemy.org/en/latest/core/tutorial.html#executing-multiple-statements
where, in the example with a list of dictionaries, a single insert 
statement is issued with a multi-row values clause. I wonder if I can 
achieve the same behavior. I am using PostgreSQL 9.3.4 and SQLAlchemy==0.9.4.
Thanks!
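
(A sketch of the distinction, as far as I understand that docs page; the 
table and DSN are made up:)

from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine

metadata = MetaData()
mytable = Table('mytable', metadata,
                Column('id', Integer, primary_key=True),
                Column('name', String))

engine = create_engine('postgresql://localhost/test')
metadata.create_all(engine)
raw_list = [{'name': 'a'}, {'name': 'b'}]

with engine.begin() as conn:
    # DBAPI executemany(): the statement is compiled once but executed
    # once per parameter set
    conn.execute(mytable.insert(), raw_list)
    # passing the list to values() renders a single multi-row statement:
    # INSERT INTO mytable (name) VALUES (%(name_0)s), (%(name_1)s)
    conn.execute(mytable.insert().values(raw_list))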

Best,
Anton. 

On Wednesday, November 6, 2013 12:58:53 PM UTC-8, Michael Bayer wrote:

 I wrote a full post regarding this topic on stackoverflow at  
 http://stackoverflow.com/questions/11769366/why-is-sqlalchemy-insert-with-sqlite-25-times-slower-than-using-sqlite3-directly/11769768#11769768
  
 .  If you start with this, I can answer more specific questions. 


 On Nov 6, 2013, at 10:28 AM, Achim Domma do...@procoders.net wrote:

  Hi, 
  
  I want to speed up my SqlAlchemy bulk inserting code and yes, I'm aware 
 that this is not the main purpose of SqlAlchemy and all databases have 
 faster low level import tools. 
  
  The background is the following: We import data from various sources and 
 apply various mappings. Currently I'm willing to trade raw runtime for much 
 simpler code, which is much easier to maintain. But I still want my code to 
 run as fast as it's possible with those assumptions. 
  
  There are two scenarios which I want to optimize: 
  
  1. Flat inserts without relations, but with unique ids generated inside 
 the database: In that case, SqlAlchemy retrieves the unique ids from the 
 database, but those ids are never used in my import process. I thought 
 about generating an insert statement out of an object. Obviously SqlAlchemy 
 has to do that too, so there might be some existing tool for that? The 
 other option would be, to tell SqlAlchemy to ignore the generated id an to 
 not retrieve it from the database. Is that possible? 
  
  2. Inserts using relations and unique ids generated in the database: I 
 think SqlAlchemy is already quite good at optimizing this and sending 
 objects as batches. If there are any tweaks and tricks to speed up inserts 
 having relations, I would be happy to hear them. 
  
  cheers, 
  Achim 
  


Re: [sqlalchemy] Reuse pre-defined Enum?

2014-07-03 Thread Anton
Hey guys,

I am facing the same issue: I am using PostgreSQL and want to use the 
native ENUM type in two tables. For migrations I am going to use Alembic.
Did you guys find any way to do this?
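
The shape I am after is roughly this (a sketch; the type and table names 
are placeholders, and whether Alembic handles the CREATE TYPE cleanly is 
exactly the open question):

from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects.postgresql import ENUM

metadata = MetaData()

# one ENUM object, attached to the MetaData and shared by both tables,
# so CREATE TYPE is emitted once rather than per table
status = ENUM('new', 'open', 'closed', name='status', metadata=metadata)

tickets = Table('tickets', metadata,
                Column('id', Integer, primary_key=True),
                Column('status', status))
orders = Table('orders', metadata,
               Column('id', Integer, primary_key=True),
               Column('status', status))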

Thanks,
Anton

On Thursday, May 1, 2014 10:33:21 AM UTC-7, Michael Bayer wrote:

 OK.  ENUM is something I’d have to dedicate several days of attention on :(


 On May 1, 2014, at 1:32 PM, Vlad Wing vlad...@gmail.com wrote:

 Yes, that's exactly what happened. I specified create_type=False and it 
 was ignored. Alembic tried to create the type anyway and, of course, it 
 failed.



[sqlalchemy] [sqlamp] version 0.6 is out

2012-01-13 Thread anton
Hi all,

I'm happy to announce the availability of sqlamp 0.6. This release
introduces support for moving nodes and adds compatibility with
Python 3.2 and SQLAlchemy 0.7.x. See the full changelog at
http://sqlamp.angri.ru/

sqlamp is an implementation of materialized path for SQLAlchemy. It is
licensed under BSD and has no external dependencies except
SQLAlchemy.

Download: http://sqlamp.angri.ru/sqlamp-0.6.tar.gz

-- 
angri




[sqlalchemy] Re: Raising exceptions from TypeDecorator.process_bind_param()

2011-06-26 Thread Anton
Michael,

On 22 июн, 17:48, Michael Bayer mike...@zzzcomputing.com wrote:
 On Jun 22, 2011, at 3:41 AM, Anton wrote:

  On 22 июн, 01:31, Michael Bayer mike...@zzzcomputing.com wrote:
  On Jun 21, 2011, at 7:04 PM, Anton wrote:

  On 22 июн, 00:47, Michael Bayer mike...@zzzcomputing.com wrote:
  I added StatementError after having too often a custom type or other 
  kind of error happen deep inside the preparation for execution with no 
  indication what statement or parameter caused the issue.    It's an 
  enhancement, and the original exception is associated with the new one 
  as orig.   In Python 3 this works even more nicely as you can see 
  we're doing raise x from e, and you get both stacktraces.    What 
  about the StatementError is not working for you ?   I'd advise against 
  doing things like business-level validations in types.

  In my case it's more like a sanity check than business logic. And the
  reason to do these checks in the type is that I want to delay them as
  much as possible. It would be acceptable to have another exception
  instead of the custom one, but the problem is that it is not easy to
  understand what went wrong looking at the stack trace. It doesn't show
  the original exception at the last line, only StatementError: 'query
  text' [params list].

  well Python 3 fixes this problem entirely by allowing exception chains.  
  What text would you prefer in StatementError ?

  Ideally it would work like it's used to in 0.5 and 0.6. Maybe, we may
  do it like this:

  +        if isinstance(e, exc.SQLAlchemyError):
  +            # don't suppress or reraise already handled exceptions
  +            return

 Well, actually I'd like every exception that happens in an execution to 
 include information about the statement, regardless of if a SQLAlchemy 
 construct raised it or not. The very common use case is when a script is 
 emitting hundreds of statements through a flush, then way deep inside, 
 there's one bound value that blows up.     A stack trace like that leads up 
 to a mapper and the unit of work but otherwise gives no context at all about 
 where this offending value might be.    

 It's unfortunate we have to wait for Python 3 for chained exceptions, but the 
 practice of reraising an exception as another one, including context, which 
 then links to the original as the cause, is an extremely common and useful 
 practice - it's why Python 3 added it.

 Its unfortunate we disagree on this so I've added a mixin for you in 
 r876ac91fd699 called DontWrapMixin.   StatementError also displays the 
 original exception class.

Unfortunately this approach doesn't work with Python 2.4, which doesn't
allow multiple inheritance for exception classes where one of the parent
classes is new-style: TypeError: exceptions must be classes,
instances, or strings (deprecated), not PathOverflowError. We could
avoid this problem by making DontWrapMixin derive from Exception
(which is old-style in 2.4 and new-style in 2.5+).

--
Anton




[sqlalchemy] Re: Raising exceptions from TypeDecorator.process_bind_param()

2011-06-22 Thread Anton
On 22 июн, 01:31, Michael Bayer mike...@zzzcomputing.com wrote:
 On Jun 21, 2011, at 7:04 PM, Anton wrote:

  On 22 июн, 00:47, Michael Bayer mike...@zzzcomputing.com wrote:
  I added StatementError after having too often a custom type or other kind 
  of error happen deep inside the preparation for execution with no 
  indication what statement or parameter caused the issue.    It's an 
  enhancement, and the original exception is associated with the new one as 
  orig.   In Python 3 this works even more nicely as you can see we're 
  doing raise x from e, and you get both stacktraces.    What about the 
  StatementError is not working for you ?   I'd advise against doing things 
  like business-level validations in types.

  In my case it's more like a sanity check than business logic. And the
  reason to do these checks in the type is that I want to delay them as
  much as possible. It would be acceptable to have another exception
  instead of the custom one, but the problem is that it is not easy to
  understand what went wrong looking at the stack trace. It doesn't show
  the original exception at the last line, only StatementError: 'query
  text' [params list].

 well Python 3 fixes this problem entirely by allowing exception chains.  What 
 text would you prefer in StatementError ?

Ideally it would work like it used to in 0.5 and 0.6. Maybe we can
do it like this:

diff -r 8f070bce386b lib/sqlalchemy/engine/base.py
--- a/lib/sqlalchemy/engine/base.py Tue Jun 21 18:34:14 2011 -0400
+++ b/lib/sqlalchemy/engine/base.py Wed Jun 22 09:39:05 2011 +0200
@@ -1689,6 +1689,10 @@
                                  parameters,
                                  cursor,
                                  context):
+        if isinstance(e, exc.SQLAlchemyError):
+            # don't suppress or reraise already handled exceptions
+            return
+
         if getattr(self, '_reentrant_error', False):
             # Py3K
             #raise exc.DBAPIError.instance(statement, parameters, e,


I then just derive my custom exception classes from
sqlalchemy.exc.SQLAlchemyError and everyone is happy. What do you
think?

--
Anton



  --
  Anton

  On Jun 21, 2011, at 6:39 PM, an...@angri.ru wrote:

  Hi,

  I'm porting sqlamp [1] to SQLAlchemy 0.7 and have some difficulties
  with it. One of them is raising an exception (non-DBAPI) from
  process_bind_param() method of classes derived from TypeDecorator. In
  0.6 [2] method _handle_dbapi_exception() of engine.base.Connectable
  doesn't munch the exception if it is not an instance of
  self.dialect.dbapi.Error. 0.7 in turn does [3]. And it reraises the
  exception of different type - instead of the original one it uses
  DBAPIError.instance(), which in my case returns StatementError. How
  can I fix it on my side?

  Thanks.

  --
  Anton

  [1]http://sqlamp.angri.ru
  [2]http://www.sqlalchemy.org/trac/browser/lib/sqlalchemy/engine/base.py?...
  [3]http://www.sqlalchemy.org/trac/browser/lib/sqlalchemy/engine/base.py?...




[sqlalchemy] Re: Raising exceptions from TypeDecorator.process_bind_param()

2011-06-22 Thread Anton

On 22 июн, 17:48, Michael Bayer mike...@zzzcomputing.com wrote:
 On Jun 22, 2011, at 3:41 AM, Anton wrote:

  On 22 июн, 01:31, Michael Bayer mike...@zzzcomputing.com wrote:
  On Jun 21, 2011, at 7:04 PM, Anton wrote:

  On 22 июн, 00:47, Michael Bayer mike...@zzzcomputing.com wrote:
  I added StatementError after having too often a custom type or other 
  kind of error happen deep inside the preparation for execution with no 
  indication what statement or parameter caused the issue.    It's an 
  enhancement, and the original exception is associated with the new one 
  as orig.   In Python 3 this works even more nicely as you can see 
  we're doing raise x from e, and you get both stacktraces.    What 
  about the StatementError is not working for you ?   I'd advise against 
  doing things like business-level validations in types.

  In my case it's more like a sanity check than business logic. And the
  reason to do these checks in the type is that I want to delay them as
  much as possible. It would be acceptable to have another exception
  instead of the custom one, but the problem is that it is not easy to
  understand what went wrong looking at the stack trace. It doesn't show
  the original exception at the last line, only StatementError: 'query
  text' [params list].

  well Python 3 fixes this problem entirely by allowing exception chains.  
  What text would you prefer in StatementError ?

  Ideally it would work like it's used to in 0.5 and 0.6. Maybe, we may
  do it like this:

  +        if isinstance(e, exc.SQLAlchemyError):
  +            # don't suppress or reraise already handled exceptions
  +            return

 Well, actually I'd like every exception that happens in an execution to 
 include information about the statement, regardless of if a SQLAlchemy 
 construct raised it or not. The very common use case is when a script is 
 emitting hundreds of statements through a flush, then way deep inside, 
 there's one bound value that blows up.     A stack trace like that leads up 
 to a mapper and the unit of work but otherwise gives no context at all about 
 where this offending value might be.    

 It's unfortunate we have to wait for Python 3 for chained exceptions, but the 
 practice of reraising an exception as another one, including context, which 
 then links to the original as the cause, is an extremely common and useful 
 practice - it's why Python 3 added it.

 Its unfortunate we disagree on this so I've added a mixin for you in 
 r876ac91fd699 called DontWrapMixin.   StatementError also displays the 
 original exception class.

Excellent, thank you!
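
For the archive, the resulting usage looks roughly like this (a sketch 
based on the commit referenced above; the exception name is from sqlamp):

from sqlalchemy.exc import DontWrapMixin

class TooManyChildrenError(Exception, DontWrapMixin):
    # propagates out of process_bind_param() as-is instead of being
    # wrapped in a StatementError
    pass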

--
Anton




[sqlalchemy] Raising exceptions from TypeDecorator.process_bind_param()

2011-06-21 Thread anton
Hi,

I'm porting sqlamp [1] to SQLAlchemy 0.7 and have some difficulties
with it. One of them is raising an exception (non-DBAPI) from the
process_bind_param() method of classes derived from TypeDecorator. In
0.6 [2] the _handle_dbapi_exception() method of engine.base.Connectable
doesn't munch the exception if it is not an instance of
self.dialect.dbapi.Error. 0.7, in turn, does [3], and it reraises an
exception of a different type - instead of the original one it uses
DBAPIError.instance(), which in my case returns StatementError. How
can I fix this on my side?
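
(A minimal illustration of the setup in question - not sqlamp's actual
code; the length check is made up:)

from sqlalchemy import types

class TooManyChildrenError(Exception):
    pass

class PathField(types.TypeDecorator):
    impl = types.String

    def process_bind_param(self, value, dialect):
        if value is not None and len(value) > 255:
            # in 0.7 this surfaces as a StatementError rather than as
            # TooManyChildrenError itself
            raise TooManyChildrenError()
        return value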

Thanks.

--
Anton

[1] http://sqlamp.angri.ru
[2] 
http://www.sqlalchemy.org/trac/browser/lib/sqlalchemy/engine/base.py?rev=7667%3A6bf675d91a56#L1321
[3] 
http://www.sqlalchemy.org/trac/browser/lib/sqlalchemy/engine/base.py?rev=7665%3A145c143cef74#L1685




[sqlalchemy] Re: Raising exceptions from TypeDecorator.process_bind_param()

2011-06-21 Thread Anton
On 22 июн, 00:47, Michael Bayer mike...@zzzcomputing.com wrote:
 I added StatementError after having too often a custom type or other kind of 
 error happen deep inside the preparation for execution with no indication 
 what statement or parameter caused the issue.    It's an enhancement, and the 
 original exception is associated with the new one as orig.   In Python 3 
 this works even more nicely as you can see we're doing raise x from e, and 
 you get both stacktraces.    What about the StatementError is not working for 
 you ?   I'd advise against doing things like business-level validations in 
 types.

In my case it's more like a sanity check than business logic. And the
reason to do these checks in the type is that I want to delay them as much
as possible. It would be acceptable to have another exception instead of
the custom one, but the problem is that it is not easy to understand what
went wrong looking at the stack trace. It doesn't show the original
exception at the last line, only StatementError: 'query text' [params
list]. It seems confusing, because there is no statement error indeed.
The stack trace itself is right, though (it ends with my raise
TooManyChildrenError() line).


 On Jun 21, 2011, at 6:39 PM, an...@angri.ru wrote:







  Hi,

  I'm porting sqlamp [1] to SQLAlchemy 0.7 and have some difficulties
  with it. One of them is raising an exception (non-DBAPI) from
  process_bind_param() method of classes derived from TypeDecorator. In
  0.6 [2] method _handle_dbapi_exception() of engine.base.Connectable
  doesn't munch the exception if it is not an instance of
  self.dialect.dbapi.Error. 0.7 in turn does [3]. And it reraises the
  exception of different type - instead of the original one it uses
  DBAPIError.instance(), which in my case returns StatementError. How
  can I fix it on my side?

  Thanks.

  --
  Anton

  [1]http://sqlamp.angri.ru
  [2]http://www.sqlalchemy.org/trac/browser/lib/sqlalchemy/engine/base.py?...
  [3]http://www.sqlalchemy.org/trac/browser/lib/sqlalchemy/engine/base.py?...




[sqlalchemy] (IntegrityError) null value in column post_id violates not-null constraint

2010-10-04 Thread Anton Shestakov
Hello all,

So, here's a simple script that produces the error: http://pastie.org/1197851
Please take a look at it; the script isn't big at all, and there's a
traceback for the error as well. In short, it issues a query to get a
simple object (Post) with a one-to-many relation tag_links (i.e. an array
of PostTagLink instances), then it does post.tag_links = [] and tries
to Session.commit() it. And fails terribly.

My gut tells me I may be doing something wrong, but I don't know what
exactly. I'll greatly appreciate your help.

Best regards, Anton Shestakov.




Re: [sqlalchemy] (IntegrityError) null value in column post_id violates not-null constraint

2010-10-04 Thread Anton Shestakov
2010/10/4 Michael Bayer mike...@zzzcomputing.com:

 On Oct 4, 2010, at 1:34 AM, Anton Shestakov wrote:

 Hello all,

 So, here's simple script that produces the error: http://pastie.org/1197851
 please take a look at it, the script isn't big at all and there's
 traceback for the error as well. In short, it issues a query to get a
 simple object (Post) with one-to-many relation tag_links (i.e. array
 of PostTagLink instances), then it does post.tag_links = [] and tries
 to Session.commit() it. And fails terribly.

 Guts tell me I may be doing something wrong, but I don't know what
 exactly. I'll greatly appreciate your help.

 Best regards, Anton Shestakov.

 you need delete-orphan cascade.   
 http://www.sqlalchemy.org/docs/orm/tutorial.html#configuring-delete-delete-orphan-cascade


Oh. Thanks for your answer. I feel stupid for asking a question that is
answered in the tutorial. Well, serves me right.
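
For later readers, the fix looks roughly like this (a sketch; Post and
PostTagLink are reconstructed from the description above, not from the
pastie):

from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

class Post(Base):
    __tablename__ = 'post'
    id = Column(Integer, primary_key=True)
    # delete-orphan makes `post.tag_links = []` DELETE the old rows
    # instead of trying to SET post_id = NULL on them
    tag_links = relationship('PostTagLink',
                             cascade='all, delete-orphan')

class PostTagLink(Base):
    __tablename__ = 'post_tag_link'
    id = Column(Integer, primary_key=True)
    post_id = Column(Integer, ForeignKey('post.id'), nullable=False)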




[sqlalchemy] [sqlamp] released version 0.5.1

2009-11-29 Thread Anton Gritsay
Hi all,

The sqlamp project has a new release -- 0.5.1. The most important
highlight is that it now supports polymorphic inheritance of nodes. This
means that your trees can now contain objects of different types,
provided that they all have one base class.

sqlamp is an implementation of materialized path for SQLAlchemy. It is
licensed under BSD and has no external dependencies except
SQLAlchemy.

docs and examples: http://sqlamp.angri.ru/
direct download of source dist: http://sqlamp.angri.ru/sqlamp-0.5.1.tar.gz

--
angri





[sqlalchemy] [sqlamp] Release 0.5 with ext.declarative support

2009-09-05 Thread Anton Gritsay

There was a question about supporting the declarative extension in sqlamp:
http://groups.google.com/group/sqlalchemy/browse_thread/thread/f0c46fd081302ec/11ec7ea99f12b377?pli=1

Starting with version 0.5, released today, it is easy and clear. See the
usage example in the documentation:
http://sqlamp.angri.ru/#declarative

--
Anton Gritsay,
http://angri.ru



[sqlalchemy] Re: Materialized Path for SQLAlchemy Declarative Base

2009-08-13 Thread Anton Gritsay

Hi, Allen!

You can use something like this (yeah, I know that it isn't
declarative in any way):

class Node(Base):
    __tablename__ = 'node'
    id = Column(Integer, primary_key=True)
    parent_id = Column(ForeignKey('node.id'))

    parent = relation('Node', remote_side=[id])

    mp_path = Column(sqlamp.PathField())
    mp_depth = Column(sqlamp.DepthField())
    mp_tree_id = Column(sqlamp.TreeIdField())

Node.mp = sqlamp.MPManager(
    Node.__table__,
    pk_field='id',
    parent_id_field='parent_id',
    path_field_name='mp_path',
    depth_field_name='mp_depth',
    tree_id_field_name='mp_tree_id'
)
Node.__mapper__.extension.append(Node.mp.mapper_extension)

Note that you need to define the mp_* columns. You don't need them if you
create the MPManager *before* creating the mapper.

On the other hand you can use this hack (but I wouldn't recommend it):

class Node(Base):
    __tablename__ = 'node'
    id = Column(Integer, primary_key=True)
    parent_id = Column(ForeignKey('node.id'))

    parent = relation('Node', remote_side=[id])

Node.mp = sqlamp.MPManager(
    Node.__table__,
    pk_field='id',
    parent_id_field='parent_id'
)
Node.__mapper__.extension.append(Node.mp.mapper_extension)
Node.__mapper__._configure_properties()

allen.fowler:
 Werner,

 On Aug 7, 12:36 pm, werner wbru...@free.fr wrote:
  Allen,

  allen.fowler wrote:

   On Aug 6, 6:54 pm, AF allen.fow...@yahoo.com wrote:

    Hello all,

    Has anyone here used the sqlamp: Materialized Path for SQLAlchemy
    library?

    I am wondering:

    1) Does it seem to work well?

    2) Did you use it with Declarative Base, and if so, how did you
    configure it?

   Anybody?

   Specifically, I am wondering how to adapt the sample code at:

   http://sqlamp.angri.ru/#quickstart

   ... so that it works with declarative base.

  I haven't used this library and I am no SA expert, so take the following
  with a grain (or two) of salt.

  I would translate this:

  class Node(object):
      mp = sqlamp.MPManager(
          node_table, node_table.c.id, node_table.c.parent_id
      )

  To:

  class Node(Base):
      __table__ = sa.Table(u'node', metadata,
          sa.Column(u'id', sa.Integer(), sa.Sequence('gen_sample_id'),
                    primary_key=True, nullable=False),
          sa.Column(u'parent_id', sa.Integer(), sa.ForeignKey(u'node.id')),
          ...
      )

      mp = sqlamp.MPManager(
          __table__, __table__.c.id, __table__.c.parent_id)

  Werner

 Thank you...

 Though, that does not look at all like the typical Declarative Base code
 I've seen / been using.  Why the explicit assignment to __table__?

 Further, where does the extension=[Node.mp.mapper_extension] binding
 happen?

 Thank you again,
 AF



[sqlalchemy] Re: Logging facilities of SQLAlchemy

2009-06-13 Thread Anton Gritsay


 unfortunately the logging module included with Python adds significant
 performance overhead even if no handlers are configured.   We would like
 to be able to have fine-grained logging available in our library, but at
 the same time when the logging is disabled for significant latency to not
 be added to the application.  Take a look at the source of logging.debug()
 - a logger with a few levels of hierarchy and no handlers configured will
 have about three method calls of overhead.   Multiply that by, for
 example, a log statement for each row received by RowProxy, and a log
 statement for each column/result processor in the ORM, and it adds up to
 many dozens of method calls per row.   The primary hindrance to speed in
 Python is function calls - perhaps projects like Unladen Swallow will
 improve this but for now, Python application profiling is almost a linear
 function w.r.t number of method calls.   So we can either remove all the
 log.debug() statements we have and just not have the capability available,
 or gate them within a conditional as we've done.

Thanks, it is now clear to me.
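
For reference, the gating pattern described above boils down to something
like this (a minimal sketch with illustrative names, not SQLAlchemy's
actual code):

import logging

class RowLoader(object):
    def __init__(self):
        self.logger = logging.getLogger(__name__)
        # evaluate the level once up front; calling logger.debug() (or
        # even isEnabledFor()) on every row is the overhead being avoided
        self._should_log_debug = self.logger.isEnabledFor(logging.DEBUG)

    def process_row(self, row):
        if self._should_log_debug:   # a single attribute check per row
            self.logger.debug("processing row %r", row)
        return row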

 Though have
 you tried DEBUG level "sqlalchemy.engine"?  the result set processing
 necessarily needs to be implemented as distinct log lines, so you'd still
 have the problem of associating many log lines with one record.  there's
 a connection proxy that can be used for more elaborate schemes.

For my needs, debug messages are not as interesting as the informational
ones. But maybe it is possible to add some unique identifier to each log
record's params?
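
For example (just an idea sketch using the stdlib, not an existing
SQLAlchemy feature):

import itertools
import logging

class RecordIdFilter(logging.Filter):
    """Stamp every record passing through the handler with an increasing id."""
    _counter = itertools.count(1)

    def filter(self, record):
        record.record_id = next(self._counter)
        return True

handler = logging.StreamHandler()
handler.addFilter(RecordIdFilter())    # handler-level filters see all records
handler.setFormatter(logging.Formatter('%(record_id)s %(name)s %(message)s'))
logging.getLogger('sqlalchemy').addHandler(handler)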



[sqlalchemy] Re: Caching

2007-12-22 Thread Anton V. Belyaev

 merge is working rudimentarily for objects with unloaded scalar/
 instance/collection attributes in r3974.  what's not yet happening is
 the merging of the various query.options() that may be present on the
 original deferred loader, which means the merged instance won't
 necessarily maintain the exact eager/lazy/deferred loading of the
 original, but this is not especially critical for the basic idea to
 work.

 example script using merge attached.

Michael, thanks a lot for your support!



[sqlalchemy] Re: Caching

2007-12-21 Thread Anton V. Belyaev

 I rearranged instance-level deferred loaders to be serializable
 instances in r3968.  you can now pickle an instance + its _state and
 restore, and all deferred/lazy loaders will be restored as well.  I
 didn't yet test it specifically with merge() but give it a try, you
 shouldn't be getting that error anymore... the pickling issue from ticket
 #870 is also no longer present.

Unfortunately it does not work (I am now at r3973).

1) I created an object with a deferred property (not None).
2) Reloaded it in a new session (to erase the deferred property).
3) Pickled/unpickled it.
4) Removed everything but the properties and _state.
5) obj = s.merge(obj, dont_load=True)  (with a fresh session s)
6) obj.deferred_ppty = None

merge worked without an exception this time.

Thanks.

P.S. Special thanks for #871 (overheads in backref). It was blocking me
from using SQLAlchemy's full feature set while staying as efficient as
raw SQL :)



[sqlalchemy] Re: Caching

2007-12-20 Thread Anton V. Belyaev

Mike, thanks for your reply.

 what happens if you just leave _state alone?  there shouldn't be any
 need to mess with _state (nor _entity_name).   the only attribute
 worth deleting for the cache operation is _sa_session_id so that the
 instance isn't associated with any particular session when it gets
 cached.  I'd also consider using session.merge(dont_load=True) which is
 designed for use with caches (and also watch out for that log.debug();
 debug() calls using the standard logging module are notoriously slow).

The reason for deleting _state is to save some space in the cache. I save
instances to the cache on the get operation, so they are unmodified. But,
of course, it is an internal thing, so the final decision is yours :)

I gave up trying merge(dont_load=True) after running this sample:

users = Table('users', metadata,
  Column('id', Integer, primary_key=True),
  Column('name', String(100)),
  Column('surname', String(100)))

mapper(User, users,
       properties={
           'surname': deferred(users.c.surname)
       })

s = create_session()
u = User()
u.name = 'anton'
u.surname = 'belyaev'
s.save(u)
s.flush()

# now we need an instance whose surname is not loaded (it is deferred)
s = create_session()
u = s.query(User).get(1)

# cache it
cache = pickle.dumps(u)

# try to restore in a new session
s = create_session()
u = pickle.loads(cache)
u = s.merge(u, dont_load=True)

The latest statement fails with:

  File "/home/anton/eggs/lib/python2.5/site-packages/SQLAlchemy-0.4.1-py2.5.egg/sqlalchemy/orm/session.py", line 1136, in object_session
    if obj in sess:
TypeError: argument of type 'NoneType' is not iterable

Some notes on this test case:
1) If surname were a simple (not deferred) column property, merge
would work fine.
2) session.update(u) instead of merge would work fine even with a
deferred column property, and the property itself would work fine (it
would load on first reference).

 the only trac ticket for this is #490, which with our current
 extension architecture is pretty easy to fix, so it's resolved in r3967 -
 MapperExtensions are now fully inherited.  If you apply the same
 MapperExtension explicitly to a base mapper and a subclass mapper,
 using the same ME instance will have the effect of it being applied
 only once (and using two different ME instances will have the effect
 of both being applied to the subclass separately).

I meant #870 (sorry, I should have provided the reference in the first
message).

Back again to the testcase and note 2:

If, let's say, I had some inheritance:

class Teacher(User):
    pass

with polymorphic_fetch='deferred' (this is important), even
session.update(u) would not work, because in this case deferred
attributes work through callables in _state, and a callable does not
survive pickling.

Thanks.



[sqlalchemy] Re: Caching

2007-12-20 Thread Anton V. Belyaev

 pickle isn't going to work with deferred columns unless you implement
 __getstate__ and __setstate__.  so the issue with session.merge() is
 just an extension of that issue, correct?  i.e. without deferreds
 merge has no issue.

 is it not reasonable to ask that objects which are to be serialized
 and cached not have any deferred columns? (or that they are explicitly
 loaded before caching?)

Sorry, I don't quite understand.

I do understand that pickle saves only __dict__ when no __getstate__ is
defined.

So, to be cached, an object should fetch all its deferred columns (if
any) and provide all of them in __getstate__. Right?
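
Something like this sketch is what I have in mind (the DEFERRED_COLS list
and the column name are made up for illustration):

class User(object):
    DEFERRED_COLS = ('surname',)   # hypothetical: names of deferred columns

    def __getstate__(self):
        # touch each deferred column so it is loaded into __dict__
        for name in self.DEFERRED_COLS:
            getattr(self, name)
        d = self.__dict__.copy()
        d.pop('_state', None)            # strip SA internals before caching
        d.pop('_sa_session_id', None)
        return d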

And if an instance from cache has nothing for one of its deferred
column values, then referencing those properties after merge won't load
them from the DB, but just fail?

Thanks.



[sqlalchemy] Caching

2007-12-19 Thread Anton V. Belyaev

Hello,

Several people have already written something about memcached + SQLAlchemy.

Remember, Mike Nelson wrote a mapper extension; it is available at:
http://www.ajaxlive.com/repo/mcmapper.py
http://www.ajaxlive.com/repo/mcache.py

I've rewritten it a bit to fit the 0.4 release of SA.

Any responses and comments are welcome, since I am not sure I am doing the
right things in the code :) I don't like the dirty tricks with deleting
_state, etc. Maybe it could be done better?

But it works somehow. It manages to cache query get() operations.

It has some problems with deferred fetch on an inherited mapper because
of some SA issues (I've found them in Trac).

# note: MapperExtension, InstanceState and EXT_PASS are SQLAlchemy 0.4-era
# names; gen_cache_key and log are defined elsewhere in the original module
import memcache as mc

class MCachedMapper(MapperExtension):
    def get(self, query, ident, *args, **kwargs):
        key = query.mapper.identity_key_from_primary_key(ident)
        obj = query.session.identity_map.get(key)
        if not obj:
            mkey = gen_cache_key(key)
            log.debug("Checking cache for %s", mkey)
            obj = mc.get(mkey)
            if obj is not None:
                obj.__dict__['_state'] = InstanceState(obj)
                obj.__dict__['_entity_name'] = None
                log.debug("Found in cache for %s : %s", mkey, obj)
                query.session.update(obj)
            else:
                obj = query._get(key, ident, **kwargs)
                if obj is None:
                    return None
                # detach SA internals so they are not pickled into the cache
                _state = obj._state
                del obj.__dict__['_state']
                del obj.__dict__['_entity_name']
                mc.set(mkey, obj)
                obj.__dict__['_state'] = _state
                obj.__dict__['_entity_name'] = None
        return obj

    def before_update(self, mapper, connection, instance):
        mkey = gen_cache_key(mapper.identity_key_from_instance(instance))
        log.debug("Clearing cache for %s because of update", mkey)
        mc.delete(mkey)
        return EXT_PASS

    def before_delete(self, mapper, connection, instance):
        mkey = gen_cache_key(mapper.identity_key_from_instance(instance))
        log.debug("Clearing cache for %s because of delete", mkey)
        mc.delete(mkey)
        return EXT_PASS

The mapper extension can be used like this:

mapper(User, users_table, extension=MCachedMapper())
session = create_session()
user_1234 = session.query(User).get(1234)  # this one loads from the DB
session.clear()
user_1234 = session.query(User).get(1234)  # this one fetches from Memcached




[sqlalchemy] Re: SA and MySQL replication.

2007-12-09 Thread Anton V. Belyaev

On Dec 6, 11:51 pm, Andrew Stromnov [EMAIL PROTECTED] wrote:
 I have a DB with one master server and several replicated slaves
 (MySQL). How to implement this functionality: a read-only request can be
 passed to any DB (master or slave), but any write request with
 following read requests must be sent to the master only (to avoid
 synchronization lag).

This is an example of vertical partitioning. I am trying to find out
how to implement this with SA too.

It is quite an often-used scheme; strange that no one has replied to
this thread yet.

SA even has support for sharding (which is more complex than vertical
partitioning, IMHO), so there should certainly be a way to do a
1-master-N-slaves scheme.
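
The shape I have in mind is roughly this (a sketch only: the URLs are made
up, and the "stick to the master after a write" rule is reduced to a
per-router flag):

import random
from sqlalchemy import create_engine

master = create_engine('mysql://app@master-host/appdb')
slaves = [create_engine('mysql://app@slave1-host/appdb'),
          create_engine('mysql://app@slave2-host/appdb')]

class EngineRouter(object):
    """Pick an engine per request; after the first write, stay on the master."""

    def __init__(self):
        self.wrote = False

    def engine_for(self, is_write):
        if is_write:
            self.wrote = True
        if is_write or self.wrote:
            return master
        return random.choice(slaves)

router = EngineRouter()
conn = router.engine_for(is_write=False).connect()   # served by a slave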



[sqlalchemy] Re: Unnecessary selects when cascade="all, delete-orphan" and composite keys

2007-11-19 Thread Anton V. Belyaev

  I am building a grading system for students and got unexpected
  performance problems. I am using a composite key for marks, which
  refer to students and subjects.

  When I am creating a mark (for student_1 and subject_1), unnecessary
  select operations are performed (select all marks for student_1 and
  select all marks for subject_1).

  Why are these selects generated, and how can I avoid them?

 The SQL issue looks like the "marks" collections on Student and
 Subject issuing a lazyload for their full collection of Mark items
 before the backref event appends the Mark object to each of them, i.e.
 the event that occurs when you issue "mark.student = student_1".
 Ordinary collections currently don't handle being present in a
 "partial" state, so in order for an append to occur, they load their
 contents.

 As a workaround, you can use lazy="dynamic" relations for the
 collections, which is a special relation that can handle append
 operations without the full collection being available.
 lazy="noload" would work as well but then you couldn't read from your
 collections.

 A future release may look into merging some of the "dynamic" relation
 behavior into an ordinary un-loaded collection so that this workaround
 would not be needed. Actually this might not be a bad idea for 0.4.2,
 so I've added ticket #871.

Thanks a lot for your reply!

Setting the relation to lazy="dynamic" really eliminated the
unnecessary selects when creating a Mark. Making the default relation a
bit "dynamic" is a great idea!
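
Concretely, the change on my side was just the relation() flag, roughly
this (a sketch; the full mapping is in my first message below):

mapper(Student, table_students, properties={
    'marks': relation(Mark, lazy='dynamic',
                      cascade='all, delete-orphan', backref='student')
})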

There is a problem when the relation is both lazy="dynamic" and
cascade="all, delete-orphan":
when the parent (Subject or Student) object is deleted, its children
(Marks) are not deleted. When the relation is not dynamic, the children
are deleted correctly.

Is this the proper behavior?

Thanks for your attention.

P.S. Congratulations to the developers and users of SA on version 0.4.1!



[sqlalchemy] Unnecessary selects when cascade="all, delete-orphan" and composite keys

2007-11-18 Thread Anton V. Belyaev

Hello all,

I am building a grading system for students and got unexpected
performance problems. I am using a composite key for marks, which
refer to students and subjects.

When I am creating a mark (for student_1 and subject_1), unnecessary
select operations are performed (select all marks for student_1 and
select all marks for subject_1).

Why are these selects generated, and how can I avoid them?

Thanks for your attention.

Here is the sample code:

mapper(Student, table_students, properties={
    'marks': relation(Mark, cascade='all, delete-orphan',
                      backref='student')
})
mapper(Subject, table_subjects, properties={
    'marks': relation(Mark, cascade='all, delete-orphan',
                      backref='subject')
})
mapper(Mark, table_marks)

[... skip ...]

mark = Mark()
s.save(mark)
mark.student = student_1
mark.subject = subject_1
mark.value = 5
s.flush()

Here is the SQL output (shortened a bit for ease of reading):

SELECT * FROM marks WHERE 1 = marks.student_id   <-- unnecessary select
SELECT * FROM marks WHERE 1 = marks.subject_id   <-- unnecessary select
BEGIN
INSERT INTO marks (subject_id, student_id, value) VALUES (1, 1, 5)
COMMIT




[sqlalchemy] SA does implicit cascading

2007-09-03 Thread Anton V. Belyaev

Hello, here is a sample:

children_table = Table('children', metadata,
    Column('id', Integer, primary_key=True))

child2group_table = Table('child2group', metadata,
    Column('child_id', Integer, ForeignKey('children.id'), nullable=False),
    Column('group_id', Integer, ForeignKey('groups.id'), nullable=False))

groups_table = Table('groups', metadata,
    Column('id', Integer, primary_key=True))

mapper(Child, children_table,
    properties={'groups': relation(Group, secondary=child2group_table)})

mapper(Group, groups_table,
    properties={'children': relation(Child, secondary=child2group_table)})

In plain English, this means there are groups and children, and a child
can belong to several groups.

When I issue this:

group = session.query(Group).get(2)
session.delete(group)
session.flush()

SA does this:

2007-09-03 11:40:25,915 INFO sqlalchemy.engine.base.Engine.0x..8c BEGIN
2007-09-03 11:40:25,918 INFO sqlalchemy.engine.base.Engine.0x..8c DELETE FROM child2group WHERE child2group.child_id = ? AND child2group.group_id = ?
2007-09-03 11:40:25,918 INFO sqlalchemy.engine.base.Engine.0x..8c [[3, 2], [4, 2]]
2007-09-03 11:40:25,919 INFO sqlalchemy.engine.base.Engine.0x..8c DELETE FROM groups WHERE groups.id = ?
2007-09-03 11:40:25,920 INFO sqlalchemy.engine.base.Engine.0x..8c [2]
2007-09-03 11:40:25,921 INFO sqlalchemy.engine.base.Engine.0x..8c COMMIT

Well, I don't mind, because this is what I really wanted to have.
Please explain why this happens. I thought only the group row would be
deleted and I would get orphaned records in the child2group table.





[sqlalchemy] Re: Consistency with DB while modifying metadata

2007-07-25 Thread Anton V. Belyaev

 t = Table('mytable', meta,
     Column(...)
 )

 someothermeta = MetaData()
 t2 = Table('mytable', someothermeta, autoload=True,
            autoload_with=connection)

 assert t.compare(t2)

I believe this should be done somehow automatically, because everyone
needs this. There should be two separate, disjoint options:

1) Autoload, when working with a previously created database for a rapid
start. Columns aren't specified at all in the Python code (SQLAlchemy).
2) Tables are specified in the code. Database tables might already
exist or might not. And when issuing create_all(), all the situations
should be handled correctly:
2a) If tables exist in the DB and match the Python-defined ones, OK.
2b) If tables do not exist in the DB, they are created, OK.
2c) If tables exist in the DB and there is a mismatch with the
Python-defined ones, an exception is raised.

I feel this is kind of natural, though I am not an expert in DB or
SQLAlchemy.
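
Something like this would already help for 2c (a rough sketch using
autoload; it compares column names only, and the helper name is mine):

from sqlalchemy import MetaData, Table

def verify_against_db(table, engine):
    # reflect the live table into a throwaway MetaData and compare names
    reflected = Table(table.name, MetaData(), autoload=True,
                      autoload_with=engine)
    db_cols = set(c.name for c in reflected.columns)
    py_cols = set(c.name for c in table.columns)
    if db_cols != py_cols:
        raise RuntimeError('%s: DB has %s, code defines %s'
                           % (table.name, sorted(db_cols), sorted(py_cols)))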

 but why not just use autoload=True across the board in the first
 place and eliminate the chance of any errors?

1) I don't know whether the tables exist; I might need to create them.
2) When they exist, autoloading them might cause inconsistency with the
SQLAlchemy-defined tables in the sources. This is exactly what I am
trying to avoid.





[sqlalchemy] Consistency with DB while modifying metadata

2007-07-24 Thread Anton V. Belyaev

Hey,

I believe there is a common approach to this situation, but I just don't
know it.

Let's say I have some tables created in the DB using SQLAlchemy. Then I
modify the Python code which describes a table (add a column, remove
another column, ...). What is the common way to handle this situation?
I guess it would be good to have an exception raised when there is a
mismatch between the DB tables and the Python-defined ones (using
SQLAlchemy).





[sqlalchemy] Re: Consistency with DB while modifying metadata

2007-07-24 Thread Anton V. Belyaev



On 24 Jul, 17:34, svilen [EMAIL PROTECTED] wrote:
 On Tuesday 24 July 2007 16:22:43 Anton V. Belyaev wrote:

  Hey,

  I believe there is a common approach to the situation, but I just
  dont know it.

  Let say, I have some tables created in the DB using SQLAlchemy.
  Then I modify Python code, which describes the table (add a column,
  remove another column,...). What is the common way to handle this
  situation? I guess it would be good to have an exception raised
  when there is a mismatch between DB tables and Python-defined
  (using SQLAlchemy).

 Very soon I'll be in your situation (with hundreds of tables), so I'm
 very interested if something comes up.

 it's in the todo list of dbcook. my idea so far is:
  - automatically reverse-engineer, i.e. autoload, the available
    db-structure into some metadata.
  - create another metadata as of the current code
  - compare the 2 metadatas and, based on some rules - ??? -
    alter/migrate the DB into the new shape.
 This has to be as automatic as possible, leaving only certain - if
 any - decisions to the user.
 Assuming that the main decision - to upgrade or not to upgrade - is
 taken positively, and any locks etc. explicit access is obtained.

 svil

Of course, DB modification is hard; it can't be done completely
automatically. For now I would like SQLAlchemy just to signal somehow
when its definitions differ from the already existing DB tables.
When I do create_all() it checks table properties anyway, but it doesn't
let me know when there is a mismatch.

