Re: [sqlalchemy] Entity name - multiple schema - relationship

2014-05-17 Thread Richard Kuesters
hi! sorry for bringing this up so late - sometimes we just have no time to
stay in sync with all the threads :) this is a subject that interests me.
let me explain my current situation:

i have a set of declarative classes that i get by calling a function,
passing its declarative base as argument (so i can use them more than
once, which happens a lot).

reading about tometadata() made me think about simplifying this process:
cloning / copying stubs of my classes to new metadata, or even simplifying
my classes. what would be the appropriate approach, if any?

today i have this (and it's not something i like):

def get_group_baz(decl_base_obj):

    class Foo(decl_base_obj):
        ...  # class definition

    class Bar(decl_base_obj):
        ...  # class definition

    return dict(foo_cls=Foo, bar_cls=Bar)


what i think would be more appropriate - but would relationships and all the
special arguments (__tablename__, __table_args__, etc.) have to become
@declared_attr?:

class Foo(object):
    __abstract__ = True  # necessary in this case?
    ...  # same definition of Foo

class Bar(object):
    __abstract__ = True  # ?
    ...  # same definition


def run():
    cls_to_app1 = dict(foo_cls=declarative_base(cls=Foo, name="App1Foo"),
                       bar_cls=declarative_base(cls=Bar, name="App1Bar"))
    cls_to_app2 = dict(foo_cls=declarative_base(cls=Foo, name="App2Foo"),
                       bar_cls=declarative_base(cls=Bar, name="App2Bar"))


or should i have a declarative base whose metadata is bound to nothing, and
then clone / copy / do something else?
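
for illustration, here's roughly the mixin shape i'm imagining, with
__tablename__, the foreign key and the relationship done as @declared_attr -
the class, table and column names below are just placeholders, not my real
model:

from sqlalchemy import Column, Integer, ForeignKey
from sqlalchemy.ext.declarative import declared_attr
from sqlalchemy.orm import relationship


class FooMixin(object):
    @declared_attr
    def __tablename__(cls):
        return "foo"

    id = Column(Integer, primary_key=True)


class BarMixin(object):
    @declared_attr
    def __tablename__(cls):
        return "bar"

    id = Column(Integer, primary_key=True)

    # ForeignKey columns and relationships on a mixin have to be
    # @declared_attr, so each mapped copy gets its own Column / relationship
    @declared_attr
    def foo_id(cls):
        return Column(Integer, ForeignKey("foo.id"))

    @declared_attr
    def foo(cls):
        return relationship("Foo")


def get_group_baz(decl_base_obj):
    class Foo(FooMixin, decl_base_obj):
        pass

    class Bar(BarMixin, decl_base_obj):
        pass

    return dict(foo_cls=Foo, bar_cls=Bar)

each call to get_group_baz() would then produce a fresh Foo / Bar pair bound
to whatever declarative base (and therefore metadata) is passed in.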


best regards,
richard.


On Tue, May 6, 2014 at 12:21 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 set_shard is a special method added by the horizontal sharding extension.

 you can do cross schema queries if you organize the schema names in terms
 of which ones apply to the “dynamic” shard and which ones to the “fixed”
 shard, if that’s how it works.

 If OTOH you literally need to join against multiple, dynamically named
 shards at one time, then you need to spell those out explicitly. It gets
 uglier, but if you want a Table that is linked on the fly to a certain
 schema explicitly, you can use table.tometadata(); see
 http://docs.sqlalchemy.org/en/rel_0_9/core/metadata.html?highlight=tometadata#sqlalchemy.schema.Table.tometadata
 .
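
 A rough sketch of that approach - the table, columns and the "tenant_a"
 schema name below are just placeholders:

from sqlalchemy import MetaData, Table, Column, Integer, String

metadata = MetaData()
customer = Table(
    "customer", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
)

# copy the Table into a separate MetaData, bound to an explicit schema
tenant_metadata = MetaData()
tenant_customer = customer.tometadata(tenant_metadata, schema="tenant_a")
# tenant_customer now renders as "tenant_a.customer" in SQL

 The copied Table can then be mapped or joined against explicitly,
 independent of the connection's search path.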



 On May 6, 2014, at 3:06 AM, Julien Meyer julien.mey...@gmail.com wrote:

 My real database schema is a little more complex.
 In reality, I have one database per company. In each database, I have
 multiple schemas which contain the same table structure.

 The “schema name execution argument” solution will not work in cases where I
 need to access more than one schema per request.

 Horizontal sharding can work: one engine per schema, with the search path set
 when creating the engine. During request processing, I can identify which
 schema to use, and with set_shard on the Query object (not found in the
 documentation - is that normal?), I can easily select the right shard to use.
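
 For reference, a minimal sketch of that one-engine-per-schema idea - the
 schema names, the chooser callables and the Customer class below are
 placeholder assumptions, not a complete configuration:

from sqlalchemy import create_engine
from sqlalchemy.ext.horizontal_shard import ShardedSession
from sqlalchemy.orm import sessionmaker

# one engine per schema; search_path is set through the DBAPI "options"
shards = {
    "tenant_a": create_engine("postgresql://localhost/mydb",
                              connect_args={"options": "-csearch_path=tenant_a"}),
    "tenant_b": create_engine("postgresql://localhost/mydb",
                              connect_args={"options": "-csearch_path=tenant_b"}),
}

Session = sessionmaker(
    class_=ShardedSession,
    shards=shards,
    # deliberately naive choosers; real ones would inspect the instance/query
    shard_chooser=lambda mapper, instance, clause=None: "tenant_a",
    id_chooser=lambda query, ident: ["tenant_a", "tenant_b"],
    query_chooser=lambda query: ["tenant_a", "tenant_b"],
)

session = Session()
# Customer stands in for a mapped class defined elsewhere;
# set_shard() limits a single query to one shard
customers = session.query(Customer).set_shard("tenant_b").all()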

 But I don't know how I can make a cross-schema query in this case?

 On Monday, May 5, 2014 at 19:12:06 UTC+2, Michael Bayer wrote:

 part of a feature that will make this kind of thing more direct is the
 “schema name execution argument” feature, which is
 https://bitbucket.org/zzzeek/sqlalchemy/issue/2685/default-schema-as-an-execution-argument.

 This application is somewhat of a “multi-tenancy” application;
 technically it's horizontally partitioned, but if you know the “society” up
 front and for the duration of an operation, you can just set that and be done
 with it.

 Assuming this is the case, an easy way to do this for now is just to set
 the “search_path” on your PostgreSQL connection before such an operation
 proceeds.   That way, when you refer to table X or Y, it will be in terms of
 whatever search path you've set; see section 5.7.3 at
 http://www.postgresql.org/docs/8.1/static/ddl-schemas.html.
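
 A minimal sketch of that idea, assuming psycopg2 and a hard-coded placeholder
 schema - in the web app the schema would be chosen per request instead:

from sqlalchemy import create_engine, event

engine = create_engine("postgresql://scott:tiger@localhost/mydb")

@event.listens_for(engine, "connect")
def set_search_path(dbapi_connection, connection_record):
    # runs for every new DBAPI connection added to the pool
    cursor = dbapi_connection.cursor()
    cursor.execute("SET search_path TO tenant_a, public")
    cursor.close()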

 There’s no need in that case to use any kind of explicit “horizontal
 sharding”.   Only if you need queries that are going to refer to multiple
 schemas at once does the HS feature come into play (and if that were the
 case I’d look into PG table inheritance).




 On May 5, 2014, at 8:41 AM, Julien Meyer julien...@gmail.com wrote:

 I need some help and advice to create a mapping.

 The context:
 - Multiple schemas on PostgreSQL (dynamic number and names) which store the
 same tables.
 - SQLAlchemy used in a Pyramid web application.

 Example:
 A table Customer and a table CustomerOrder (linked by customer.id), and
 one schema per society (not known before runtime).


 I read the documentation about horizontal sharding, vertical sharding and
 entity name, but I'm a little bit confused about the right solution for my
 problem.

 If I use Entity Name, I don't know how to configure the relationship
 between my two dynamic classes, because I need to specify a class at
 configuration time but I only know the real subclasses at runtime.

 If I use the Horizontal sharding, I need to have an 

Re: [sqlalchemy] Entity name - multiple schema - relationship

2014-05-17 Thread Michael Bayer
Just a thought: if you're really mapping tons of classes on the fly as some
kind of en masse table gateway, maybe look at automap.
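
A minimal sketch of what that looks like - the engine URL and the reflected
class name are placeholders, assuming a "customer" table exists in the target
database:

from sqlalchemy import create_engine
from sqlalchemy.ext.automap import automap_base

engine = create_engine("postgresql://scott:tiger@localhost/mydb")

# reflect the existing tables and generate mapped classes on the fly
Base = automap_base()
Base.prepare(engine, reflect=True)

Customer = Base.classes.customer   # one generated class per reflected table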

Sent from my iPhone

Re: [sqlalchemy] Entity name - multiple schema - relationship

2014-05-17 Thread Richard Gerd Kuesters
 

in fact, i map classes against different metadata for different schemas,
since i like to have specialized parts of my app distributed in the
database (the postgres part). another part of my app generates sqlite
databases on the fly, based on the same structures conceived earlier.
kind of a strange situation to explain, as a matter of fact.
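
roughly, the sqlite part looks something like this - pg_metadata, the file
name and the schema handling below are just illustrative, not my actual code:

from sqlalchemy import create_engine, MetaData

# pg_metadata stands in for the MetaData holding the postgres table definitions
sqlite_metadata = MetaData()
for table in pg_metadata.sorted_tables:
    # drop the postgres schema qualifier while copying, since sqlite has none
    table.tometadata(sqlite_metadata, schema=None)

sqlite_engine = create_engine("sqlite:///export.db")
sqlite_metadata.create_all(sqlite_engine)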

On 2014-05-17 18:37, Michael Bayer wrote: 

 Just a thought: if you're really mapping tons of classes on the fly as some
 kind of en masse table gateway, maybe look at automap.
 
 Sent from my iPhone 
 