[sqlalchemy] Re: how to join tables properly

2008-09-29 Thread Alex K

Hello, Ksenia,

This may help:

http://www.sqlalchemy.org/docs/05/mappers.html#advdatamapping_relation_strategies_containseager
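In outline, the technique described there combines explicit inner joins with contains_eager(). A minimal sketch against the hypothetical Table1/Table2/Table3 classes from the question (not runnable as-is; it assumes a configured session and the mappers shown below): join() renders plain INNER JOINs, and contains_eager() tells the query to populate the related attributes from those joined columns instead of emitting its own LEFT OUTER JOIN.

```python
from sqlalchemy.orm import contains_eager

rows = (session.query(Table1)
        .join(Table1.table2)
        .join(Table1.table3)
        .options(contains_eager(Table1.table2),
                 contains_eager(Table1.table3))
        .all())
```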

On Sep 29, 2:38 am, Ksenia [EMAIL PROTECTED] wrote:
 Hi list,

 I can't figure out a very simple thing. All I want is to do this
 query:

 select table1.A, table2.B, table3.C from table1, table2, table3 where
 table1.foreignkey2 = table2.primarykey and table1.foreignkey3 =
 table3.primarykey;

 When rendering results, I want to be able to refer to the properties
 of the joined tables:

 <td>${row.A}</td>
 <td>${row.table2.B}</td>
 <td>${row.table3.C}</td>

 My mapper looks like:

 mapper(Table1, table1,
     properties={
         'table2': relation(Table2, lazy=False, uselist=False),
         'table3': relation(Table3, lazy=False, uselist=False)
     }
 )

 When I run it, it creates a query with tables joined via LEFT OUTER
 JOIN.

 How can I join the tables eagerly (with a minimal number of queries),
 but without a LEFT OUTER JOIN, and still be able to refer to the
 properties of the joined tables?

 Appreciate a hint.

 Thanks
 Ksenia.
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Force Load all attributes during a Query

2008-09-29 Thread Harish K Vishwanath
Hello,

Is there a way to tell the session to force-load all the attributes of an
object during a query (i.e. even if it is already present in the identity
map)?

For some reason, auto_expire_on_commit, if set to True, does not work well
with my app. I always do a session.query(object) before getting an object.
If there were a way to force all my queries to load all attributes of the
object by default, that would be great.
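One thing worth trying (a sketch against the 0.4/0.5 Query API; MyObject stands in for your mapped class) is Query.populate_existing(), which overwrites the attributes of instances already present in the identity map with the freshly loaded row data:

```python
# Refresh every matched instance from the database, even if it is
# already in the identity map:
objs = session.query(MyObject).populate_existing().all()

# Or expire a single instance so its next attribute access re-selects:
session.expire(some_obj)
```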

-- 
Regards,
Harish




[sqlalchemy] Setting up a many-to-many relation with no right-hand-side table

2008-09-29 Thread Nick Murdoch

Hi everyone,

I have an unusual situation where I have a set of users that are
stored in an SQL DB as usual, and a set of news articles that are
stored elsewhere (not in an SQL database).

Usually, if both sets were stored in two DB tables, I'd be able to set
up a many-to-many relationship with them in SQLAlchemy with a joining
table. I'd have the following:

--

users_table = Table('tg_user', metadata,
    Column('user_id', Integer, primary_key=True),
    Column('user_name', Unicode(16), unique=True),
    Column('email_address', Unicode(255), unique=True),
    Column('display_name', Unicode(255)),
    Column('password', Unicode(40)),
    Column('created', DateTime, default=datetime.now),
)

articles_table = Table('articles', metadata,
    Column('article_id', Integer, primary_key=True),
    ...
)

user_articles_table = Table('user_articles', metadata,
    Column('user_id', Integer, ForeignKey('tg_user.user_id')),
    Column('article_id', Integer, ForeignKey('articles.article_id')),
)

class User(object): pass
class Article(object): pass

mapper(User, users_table,
    properties=dict(
        articles=relation(Article, secondary=user_articles_table)))

--

What I need to do instead, with no Articles table, is store in
user_articles_table  the user_id and article_id as I would usually,
but also have  user.articles  available as a list of integers (or some
wrapper that provided the integers).

Example:

 >>> user = session.query(User).first()
 >>> user.user_id
 1
 >>> user.articles
 [1, 294, 239, 390, 20, 489, 9378]
 >>> user.articles.append(1000)
 >>> session.commit() # INSERT INTO user_articles (user_id, article_id)
 ...                  # VALUES (1, 1000);


Is this possible using SQLAlchemy? I'm aware I could simply perform
inserts and selects manually on the user_articles_table, but I'd quite
like SA to take care of that for me if at all possible.

I'm using SA 0.4

Thanks,

Nick



[sqlalchemy] Re: Setting up a many-to-many relation with no right-hand-side table

2008-09-29 Thread Alex K

Not sure about append, but you can definitely use column_property for
generated select property, more details here:

http://www.sqlalchemy.org/docs/05/mappers.html#advdatamapping_mapper_expressions

Alex

On 29 сент, 16:03, Nick Murdoch [EMAIL PROTECTED] wrote:
 Hi everyone,

 I have an unusual situation where I have a set of users that are
 stored in an SQL DB as usual, and a set of news articles that are
 stored elsewhere (not in an SQL database).

 Usually, if both sets were stored in two DB tables, I'd be able to set
 up a many-to-many relationship with them in SQLAlchemy with a joining
 table. I'd have the following:

 --

 users_table = Table('tg_user', metadata,
 Column('user_id', Integer, primary_key=True),
 Column('user_name', Unicode(16), unique=True),
 Column('email_address', Unicode(255), unique=True),
 Column('display_name', Unicode(255)),
 Column('password', Unicode(40)),
 Column('created', DateTime, default=datetime.now),
 )

 articles_table = Table('articles', metadata,
 Column('article_id', Integer, primary_key=True),
 ...
 )

 user_articles_table = Table('user_articles', metadata,
 Column('user_id', Integer, ForeignKey('tg_user.user_id')),
 Column('article_id', Integer, ForeignKey('articles.article_id')),
 )

 class User(object): pass
 class Article(object): pass

 mapper(User, users_table,
 properties=dict(
 articles=relation(Article, secondary=user_articles_table)))

 --

 What I need to do instead, with no Articles table, is store in
 user_articles_table  the user_id and article_id as I would usually,
 but also have  user.articles  available as a list of integers (or some
 wrapper that provided the integers).

 Example:

  user = session.query(User).first()
  user.user_id
 1
  user.articles

 [1, 294, 239, 390, 20, 489, 9378]

  user.articles.append(1000)
  session.commit() # INSERT INTO user_articles (user_id, article_id) VALUES 
  (1, 1000);

 Is this possible using SQLAlchemy? I'm aware I could simply perform
 inserts and selects manually on the user_articles_table, but I'd quite
 like SA to take care of that for me if at all possible.

 I'm using SA 0.4

 Thanks,

 Nick



[sqlalchemy] Dynamic mapping strategy

2008-09-29 Thread Alex K

Hi All,

My application uses XML schema to load tables, classes, set SQLA
mapping and relations.

Each processing node (thread or process, depends on config) does not
know beforehand what type of mapping it will process, that's why I
need to create mapping (parse schema, and generate classes) per each
request.

This approach worked fine for me; however, I have issues when there is
high load on the same processing node - SQLA throws an exception
saying that my dynamic class is not mapped ("Class app.User is not
mapped").

I understand that it may be some app/configuration issue (and I'm
almost sure it is).

The docs say that mappings should be created only at the app level; I
suspect that violating this rule can cause errors of this kind.

I thought that clear_mappers could help here, but I fear it will clear
all mappers in the process, since _mapper_registry is a module-level
global hash.

So, any recipe for dynamic mapping, or ideas on how I can adjust the
configuration, would be highly appreciated.

Regards,
Alex



[sqlalchemy] Best way to populate with data?

2008-09-29 Thread Arthur Pemberton

What is the best way to populate an app with data, for base-data and
test-data purposes?

Is there an easy way to maintain some XML files with the base and
test data? Or is it best to just have a Python script with the data
hard-coded into it?
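One lightweight pattern (a sketch using only the standard library; the users table and its rows are made-up examples, not from this thread) is to keep the seed data as plain Python structures, or load the same structures from a JSON file, and insert them with an idempotent script:

```python
import sqlite3

# Hypothetical seed data; this could equally be json.load()-ed from a file.
SEED_USERS = [
    (1, "alice"),
    (2, "bob"),
]

def populate(conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
    # INSERT OR IGNORE keeps the script safe to run repeatedly
    conn.executemany(
        "INSERT OR IGNORE INTO users (id, name) VALUES (?, ?)", SEED_USERS)
    conn.commit()

conn = sqlite3.connect(":memory:")
populate(conn)
populate(conn)  # re-running does not duplicate rows
```

The same idea carries over to SQLAlchemy: iterate over the structures and issue table.insert() calls.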

-- 
Fedora 9 : sulphur is good for the skin
( www.pembo13.com )




[sqlalchemy] Re: Setting up a many-to-many relation with no right-hand-side table

2008-09-29 Thread Michael Bayer


On Sep 29, 2008, at 8:03 AM, Nick Murdoch wrote:


 What I need to do instead, with no Articles table, is store in
 user_articles_table  the user_id and article_id as I would usually,
 but also have  user.articles  available as a list of integers (or some
 wrapper that provided the integers).

 Example:

 user = session.query(User).first()
 user.user_id
 1
 user.articles
 [1, 294, 239, 390, 20, 489, 9378]
 user.articles.append(1000)
 session.commit() # INSERT INTO user_articles (user_id,  
 article_id) VALUES (1, 1000);


 Is this possible using SQLAlchemy? I'm aware I could simply perform
 inserts and selects manually on the user_articles_table, but I'd quite
 like SA to take care of that for me if at all possible.


We have the wrapper thing - it's the association proxy. Docs are
here: http://www.sqlalchemy.org/docs/04/plugins.html#plugins_associationproxy

In this case you'd probably map a class to the user_articles_table and
use the association proxy to create user.articles against the article id.
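For readers who haven't used the plugin, its effect can be illustrated in plain Python (a simplified stand-in, not the actual associationproxy implementation; UserArticle is a hypothetical class mapped to user_articles_table):

```python
class UserArticle:
    """Stand-in for a class mapped to user_articles_table."""
    def __init__(self, article_id):
        self.article_id = article_id


class IdListProxy:
    """Presents a list of link objects as a plain list of their
    article_id values, writing through on append."""

    def __init__(self, links):
        self._links = links

    def __iter__(self):
        return (link.article_id for link in self._links)

    def __len__(self):
        return len(self._links)

    def append(self, article_id):
        # The real proxy would create a new mapped UserArticle row here.
        self._links.append(UserArticle(article_id))


links = [UserArticle(1), UserArticle(294)]
articles = IdListProxy(links)
articles.append(1000)
```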




[sqlalchemy] Re: Dynamic mapping strategy

2008-09-29 Thread Michael Bayer


On Sep 29, 2008, at 9:30 AM, Alex K wrote:


 Hi All,

 My application uses XML schema to load tables, classes, set SQLA
 mapping and relations.

 Each processing node (thread or process, depends on config) does not
 know beforehand what type of mapping it will process, that's why I
 need to create mapping (parse schema, and generate classes) per each
 request.

 This approach worked fine for me, however  I have issues when there is
 high load on the same processing node - SQLA throws an exception
 saying that my dynamic class is not mapped. (Class app.User is not
 mapped)

 I understand that it may be some app/configuration issue (and I'm
 almost sure it is).

 Docs say that mapping should be created only at the app level, I
 thought that violating this rule can cause errors of this kind.

OK, what we mean more specifically there is that the mapper is created  
in the same scope as the class.  If you are creating classes local to  
a function, you can create the mapper there as well.

That you have issues during high load is suspect, however.

 I thought that clear_mappers can help here, but I fear it will clear
 all mappers in the process,  since _mapper_registry is a module level
 global hash.

this is correct.   I wouldn't use clear_mappers here. If your  
classes fall out of scope and are garbage collected, the corresponding  
mappings should also fall out of scope, *provided no remaining mappers  
reference them*.   This behavior is also tested more fully in 0.5 and  
I can't guarantee the automatic garbage collection as strongly in 0.4.

The overall idea of removing partial sets of mapped classes from a
larger group is not something we've tested or tried to support.  It
won't work, for example, if you leave a mapper in place which is the
parent mapper of one of the removed mappers, or if a remaining mapper
has relations to the removed mapper.  The results are not defined in
those cases.

Another nasty side effect is that the compile phase of mappers is
globally mutexed.  If you're creating mappings per request, the
compilation stage is mutexed against all other requests.  This is
all a product of the fact that mappings deal with a global registry.
Additionally, since mapper compilation is only meant to occur once per
application, it's not at all optimized for per-request performance and
will make for a slightly more sluggish application if called constantly.

It wouldn't be very difficult for the ORM to work using a Pylons-like
"stacked object proxy" approach where the mapper registry and its
mutex are swapped in and out on a per-request basis, but it's not clear
right now that this is something that should be built in.

The most unclear aspect of this, to me, is why you need to drop  
anything at all.   It seems that the incoming XML schema defines a set  
of tables to be worked with.   Certainly, the number of tables which  
an instance of your application uses is fixed.It seems like it  
would be very easy to just check the XML schema's identifier and look  
up in a dictionary that this set of mappings has already been  
defined.   You'd have none of the issues related to clearing partial  
mappings and you wouldn't have the enormous overhead of parsing  
schemas and recompiling a set of mappers on every request.
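The lookup Michael describes can be a per-process dictionary keyed on the schema identifier (a sketch; build_mappings stands in for the app's parse-and-map step, and in a threaded node the cache access should be guarded by a lock):

```python
_mapping_cache = {}

def get_mappings(schema_id, build_mappings):
    """Parse the schema and compile mappers only the first time a given
    schema identifier is seen; reuse the cached result afterwards."""
    if schema_id not in _mapping_cache:
        _mapping_cache[schema_id] = build_mappings(schema_id)
    return _mapping_cache[schema_id]

calls = []

def build_mappings(schema_id):
    calls.append(schema_id)  # stands in for the expensive parse/compile
    return {"schema": schema_id}

first = get_mappings("invoices-v1", build_mappings)
second = get_mappings("invoices-v1", build_mappings)  # cache hit
```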










[sqlalchemy] Uppercase column names in table.insert({ })

2008-09-29 Thread Itamar Ravid

Hey guys - I've run into a strange bug in 0.4.7p1 while trying to make
use of Elixir's versioned plugin. In Oracle, given a table created as
such:

CREATE TABLE books
(book_id NUMBER PRIMARY KEY);

The following code fails:

dbEng = create_engine("oracle://:@pearl")
meta = MetaData()
meta.bind = dbEng

booksTable = Table("books", meta, autoload=True)

booksTable.insert({'BOOK_ID': 200}).execute()

Whilst the following succeeds:

booksTable.insert({'book_id': 200}).execute()

This is strange, considering the fact that the following:

result = booksTable.select().execute().fetchone()
print result.keys()

... results in the column names in uppercase.

Am I doing anything wrong?




[sqlalchemy] Re: Dynamic mapping strategy

2008-09-29 Thread Alex K

Thanks for such a detailed response!

I'll try schema caching approach and will bring back the results,

Thanks again,
Alex

On 29 сент, 18:21, Michael Bayer [EMAIL PROTECTED] wrote:
 On Sep 29, 2008, at 9:30 AM, Alex K wrote:





  Hi All,

  My application uses XML schema to load tables, classes, set SQLA
  mapping and relations.

  Each processing node (thread or process, depends on config) does not
  know beforehand what type of mapping it will process, that's why I
  need to create mapping (parse schema, and generate classes) per each
  request.

  This approach worked fine for me, however  I have issues when there is
  high load on the same processing node - SQLA throws an exception
  saying that my dynamic class is not mapped. (Class app.User is not
  mapped)

  I understand that it may be some app/configuration issue (and I'm
  almost sure it is).

  Docs say that mapping should be created only at the app level, I
  thought that violating this rule can cause errors of this kind.

 OK, what we mean more specifically there is that the mapper is created
 in the same scope as the class.  If you are creating classes local to
 a function, you can create the mapper there as well.

 That you have issues during high load is suspect, however.

  I thought that clear_mappers can help here, but I fear it will clear
  all mappers in the process,  since _mapper_registry is a module level
  global hash.

 this is correct.   I wouldn't use clear_mappers here. If your
 classes fall out of scope and are garbage collected, the corresponding
 mappings should also fall out of scope, *provided no remaining mappers
 reference them*.   This behavior is also tested more fully in 0.5 and
 I can't guarantee the automatic garbage collection as strongly in 0.4.

 The overall idea of removing partial sets of mapped classes from a
 larger group is not something we've tested or tried to support.It
 wont work, for example, if you leave a mapper in place which is parent
 mapper of one of the removed mappers, or if the remaining mapper has
 relations to the removed mapper.   The results are not defined in
 those cases.

 Another nasty side effect is that the compile phase of mappers is
 globally mutexed.   If you're creating mappings per request, the
 compilation stage is mutexted against all other requests.   This is
 all a product of the fact that mappings deal with a global registry.
 Additionally, since mapper compilation is only meant to occur once per
 application, its not at all optimized for per-request performance and
 will make for a slightly more sluggish application if called constantly.

 It wouldn't be very difficult for the ORM to work using a Pylons-like
 stacked object proxy approach where the mapper registry and its
 mutex are swapped in and out on a per-request basis but its not clear
 right now that this is something that should be built in.

 The most unclear aspect of this, to me, is why you need to drop
 anything at all.   It seems that the incoming XML schema defines a set
 of tables to be worked with.   Certainly, the number of tables which
 an instance of your application uses is fixed.It seems like it
 would be very easy to just check the XML schema's identifier and look
 up in a dictionary that this set of mappings has already been
 defined.   You'd have none of the issues related to clearing partial
 mappings and you wouldn't have the enormous overhead of parsing
 schemas and recompiling a set of mappers on every request.



[sqlalchemy] Re: Uppercase column names in table.insert({ })

2008-09-29 Thread Michael Bayer



This is the expected behavior.  SQLA operates in a case-sensitive
fashion whenever a table or column identifier is given in mixed case
or upper case.  Use all lower case for case-insensitive.  Since
SQLA seeks to provide a database-agnostic API to the backend, this
includes Oracle as well.  "Case sensitive" means that the identifier
will be quoted, in which case the database expects to match the
identifier against an identifier of the exact same characters.
Oracle's usual UPPERCASE identifiers are in fact case insensitive.
So for case-insensitive identifiers, make sure you use all lower case
names like 'book_id'.
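Concretely, at the SQL level (illustrative Oracle semantics; the books table is from the original post):

```sql
-- Oracle folds unquoted identifiers to upper case, so both of these
-- match the BOOK_ID column that was created without quotes:
SELECT book_id FROM books;
SELECT "BOOK_ID" FROM books;

-- A quoted lower-case identifier must match character for character,
-- so this one fails against the same table:
SELECT "book_id" FROM books;
```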

The keys() method of the ResultProxy, OTOH, doesn't attempt to
editorialize what comes back from the cursor, so in the case of Oracle
you get upper-case names (these are acceptable to use as keys for
row['somekey']).  While we maybe could try to connect the original
select() statement's SQLA-encoded column names back to the
cursor.description's keys and return them as defined on the SQLAlchemy
side, you'd still get the uppercase names when we didn't have that
information, like execute("select * from table").  I have a vague
recollection of someone having a specific issue with that behavior but
I'm not finding what it was at the moment.


On Sep 29, 2008, at 10:04 AM, Itamar Ravid wrote:


 Hey guys - I've ran into a strange bug in 0.4.7p1 while trying to make
 use of Elixir's versioned plugin. In Oracle, given a table created as
 such:

 CREATE TABLE books
 (book_id NUMBER PRIMARY KEY);

 The following code fails:

 dbEng = create_engine("oracle://:@pearl")
 meta = MetaData()
 meta.bind = dbEng

 booksTable = Table("books", meta, autoload=True)

 booksTable.insert({'BOOK_ID': 200}).execute()

 Whilst the following, succeeds:

 booksTable.insert({'book_id': 200}).execute()

 This is strange, considering the fact that the following:

 result = booksTable.select().execute().fetchone()
 print result.keys()

 ... results in the column names in uppercase.

 Am I doing anything wrong?

 





[sqlalchemy] Re: how to join tables properly

2008-09-29 Thread Ksenia

Hi Alex,

Thanks a lot for your help! This is exactly what I needed. I had almost
given up and started writing plain SQL to get the desired results ;)

It works like a charm now. In case anyone is interested, I'm pasting the
result below (BTW, it's an interface to a (custom) table from the Roundup
issue tracker).

Ksenia.


class Ticket(RoundupModel):
    @classmethod
    def _get_base_query(cls):
        assigned_alias = aliased(User)
        creator_alias = aliased(User)
        actor_alias = aliased(User)
        result = session.query(Ticket).join(
            Ticket.project,
            Project.customer,
            Ticket.priority,
            Ticket.status,
            (creator_alias, Ticket.creator),
            (actor_alias, Ticket.actor)
        ).outerjoin(
            Ticket.assignedtodepartment,
            (assigned_alias, Ticket.assignedto)
        ).options(
            contains_eager(Ticket.project),
            contains_eager(Ticket.project, Project.customer),
            contains_eager(Ticket.priority),
            contains_eager(Ticket.status),
            contains_eager(Ticket.assignedtodepartment),
            contains_eager(Ticket.assignedto, alias=assigned_alias),
            contains_eager(Ticket.creator, alias=creator_alias),
            contains_eager(Ticket.actor, alias=actor_alias),
        )
        return result


And the relevant part of the mapper:

mapper(Ticket, tickets,
    properties={
        'status': relation(TicketStatus),
        'priority': relation(TicketPriority),
        'actor': relation(User,
            primaryjoin=users.c.id == tickets.c._actor),
        'creator': relation(User,
            primaryjoin=users.c.id == tickets.c._creator),
        'project': relation(Project),
        'assignedto': relation(User,
            primaryjoin=users.c.id == tickets.c._assignedto),
        'assignedtodepartment': relation(Department,
            primaryjoin=departments.c.id == tickets.c._assignedtodepartment),
    }
)


On Sep 29, 11:19 am, Alex K [EMAIL PROTECTED] wrote:
 Hello, Ksenia,

 This may help:

 http://www.sqlalchemy.org/docs/05/mappers.html#advdatamapping_relatio...

 On Sep 29, 2:38 am, Ksenia [EMAIL PROTECTED] wrote:

  Hi list,

  I can't figure out a very simple thing. All I want is to do this
  query:

  select table1.A, table2.B, table3.C from table1, table2, table3 where
  table1.foreignkey2 = table2.primarykey and table1.foreignkey3 =
  table3.primarykey;

  When rendering results, I want to be able to refer to the properties
  of the joined tables:

  <td>${row.A}</td>
  <td>${row.table2.B}</td>
  <td>${row.table3.C}</td>

  My mapper looks like:

  mapper(Table1, table1,
      properties={
          'table2': relation(Table2, lazy=False, uselist=False),
          'table3': relation(Table3, lazy=False, uselist=False)
      }
  )

  When I run it, it creates a query with tables joined via LEFT OUTER
  JOIN.

  How can I join the tables eagerly (with a minimal number of queries),
  but without a LEFT OUTER JOIN, and still be able to refer to the
  properties of the joined tables?

  Appreciate a hint.

  Thanks
  Ksenia.



[sqlalchemy] Re: Uppercase column names in table.insert({ })

2008-09-29 Thread Itamar Ravid
Thanks for the answer, Mike. I was used to Oracle's behavior while writing
raw SQL, in which the case of unquoted column identifiers doesn't matter.
This behavior seems reasonable enough, although the inconsistency between
the cursor description and SQLA's column identifiers could throw some people
off.

This causes the versioned plugin from Elixir.ext to fail on Oracle, so I'll
submit a patch against it to lowercase the column names passed to
table.insert(). Thanks again.

On Mon, Sep 29, 2008 at 5:39 PM, Michael Bayer [EMAIL PROTECTED]wrote:




 This is the expected behavior.  SQLA operates in a case sensitive
 fashion whenever a table or column identifier is given in mixed case
 or upper case.   Use all lower case for case insensitive.  Since
 SQLA seeks to provide a database-agnostic API to the backend, this
 includes Oracle as well.case sensitive means that the identifier
 will be quoted, in which case the database expects to match the
 identifier against an identifier of the exact same characters.
 Oracle's usual UPPERCASE identifiers are in fact case insensitive.
 So for case insensitive identifiers, make sure you use all lower case
 names like 'book_id'.

 The keys() method of the ResultProxy, OTOH, doesn't attempt to
 editorialize what comes back from the cursor, so in the case of oracle
 you get upper case names (these are acceptable to use as keys for
 row['somekey']).   While we maybe could try to connect the original
 select() statement's SQLA-encoded column names back to the
 cursor.description's keys and return them as defined on the SQLAlchemy
 side, you'd still get the uppercase names when we didn't have that
 information, like execute(select * from table).   I have a vague
 recollection of someone having a specific issue with that behavior but
 I'm not finding what it was at the moment.


 On Sep 29, 2008, at 10:04 AM, Itamar Ravid wrote:

 
  Hey guys - I've ran into a strange bug in 0.4.7p1 while trying to make
  use of Elixir's versioned plugin. In Oracle, given a table created as
  such:
 
  CREATE TABLE books
  (book_id NUMBER PRIMARY KEY);
 
  The following code fails:
 
  dbEng = create_engine("oracle://:@pearl")
  meta = MetaData()
  meta.bind = dbEng
 
  booksTable = Table("books", meta, autoload=True)
 
  booksTable.insert({'BOOK_ID': 200}).execute()
 
  Whilst the following, succeeds:
 
  booksTable.insert({'book_id': 200}).execute()
 
  This is strange, considering the fact that the following:
 
  result = booksTable.select().execute().fetchone()
  print result.keys()
 
  ... results in the column names in uppercase.
 
  Am I doing anything wrong?
 
  


 





[sqlalchemy] Declarative and common fields

2008-09-29 Thread Joril

Hi everyone!
I'm new to SQLAlchemy and I'm using version 0.5rc1.
I need every entity class to have a few common fields, so I tried
writing an abstract base class, declarative-style, that every other
entity class would subclass. So for example:

---
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, String, DateTime, Integer

ORMBase = declarative_base()

class BaseObject(ORMBase):
    id = Column(Integer, primary_key=True)
    creation_time = Column(DateTime)
    modify_time = Column(DateTime)

class TestEntity(BaseObject):
    value = Column(String)
---

But SQLAlchemy complains that
sqlalchemy.exc.ArgumentError: Mapper 'Mapper|BaseObject|None' does not
have a mapped_table specified.  (Are you using the return value of
table.create()?  It no longer has a return value.)

Is there a way to tell SQLAlchemy to treat BaseObject like a non-
mapped-class? I tried using ORMBase as mixin to TestEntity (so
BaseObject extends object and TestEntity extends BaseObject and
ORMBase), but now I get a

sqlalchemy.exc.ArgumentError: Mapper Mapper|TestEntity|tests could not
assemble any primary key columns for mapped table 'tests'

so I guess that maybe I'm going down the wrong road.. Am I doing
something that Declarative doesn't like? :) Should I try Elixir
instead?

Many thanks for your time!
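One workaround sometimes used (not from this thread; a sketch at the Table level) is to generate the shared columns from a factory function, since a given Column instance can only ever be attached to one Table:

```python
from sqlalchemy import Table, Column, Integer, String, DateTime, MetaData

metadata = MetaData()

def common_columns():
    # Return fresh Column objects on every call: reusing one Column
    # instance across two Tables is not allowed.
    return [
        Column('id', Integer, primary_key=True),
        Column('creation_time', DateTime),
        Column('modify_time', DateTime),
    ]

tests_table = Table('tests', metadata, *common_columns(),
                    Column('value', String(50)))
events_table = Table('events', metadata, *common_columns(),
                     Column('name', String(50)))
```

Each class can then be mapped to its own table that includes the common columns.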



[sqlalchemy] Re: Declarative and common fields

2008-09-29 Thread az

I don't know if Elixir has such 'attribute/behaviour injectors'; I have 
DBCOOK_no_mapping=True in dbcook.sf.net that does that. The machinery 
behind it is somewhat tricky (x in class.__dict__ and not in 
base_class.__dict__ etc. stuff).

You may try your stuff as a mixin; that may or may not work.

On Monday 29 September 2008 18:01:00 Joril wrote:
 Hi everyone!
 I'm new to SQLAlchemy and I'm using version 0.5rc1..
 I need every entity class to have a few common fields, so I tried
 writing an abstract base class, declarative-style, that every
 other entity class would subclass. So for example:

 ---
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy import Column, String, DateTime, Integer

 ORMBase = declarative_base()

 class BaseObject(ORMBase):
 id = Column(Integer, primary_key=True)
 creation_time = Column(DateTime)
 modify_time = Column(DateTime)

 class TestEntity(BaseObject):
 value = Column(String)
 ---

 But SQLAlchemy complains that
 sqlalchemy.exc.ArgumentError: Mapper 'Mapper|BaseObject|None' does
 not have a mapped_table specified.  (Are you using the return value
 of table.create()?  It no longer has a return value.)

 Is there a way to tell SQLAlchemy to treat BaseObject like a non-
 mapped-class? I tried using ORMBase as mixin to TestEntity (so
 BaseObject extends object and TestEntity extends BaseObject and
 ORMBase), but now I get a

 sqlalchemy.exc.ArgumentError: Mapper Mapper|TestEntity|tests could
 not assemble any primary key columns for mapped table 'tests'

 so I guess that maybe I'm going down the wrong road.. Am I doing
 something that Declarative doesn't like? :) Should I try Elixir
 instead?

 Many thanks for your time!
 





[sqlalchemy] Re: Declarative and common fields

2008-09-29 Thread Gaetan de Menten

On Mon, Sep 29, 2008 at 5:01 PM, Joril [EMAIL PROTECTED] wrote:

 Hi everyone!
 I'm new to SQLAlchemy and I'm using version 0.5rc1..
 I need every entity class to have a few common fields, so I tried
 writing an abstract base class, declarative-style, that every other
 entity class would subclass. So for example:

 ---
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy import Column, String, DateTime, Integer

 ORMBase = declarative_base()

 class BaseObject(ORMBase):
id = Column(Integer, primary_key=True)
creation_time = Column(DateTime)
modify_time = Column(DateTime)

 class TestEntity(BaseObject):
value = Column(String)
 ---

 But SQLAlchemy complains that
 sqlalchemy.exc.ArgumentError: Mapper 'Mapper|BaseObject|None' does not
 have a mapped_table specified.  (Are you using the return value of
 table.create()?  It no longer has a return value.)

 Is there a way to tell SQLAlchemy to treat BaseObject like a non-
 mapped-class? I tried using ORMBase as mixin to TestEntity (so
 BaseObject extends object and TestEntity extends BaseObject and
 ORMBase), but now I get a

 sqlalchemy.exc.ArgumentError: Mapper Mapper|TestEntity|tests could not
 assemble any primary key columns for mapped table 'tests'

 so I guess that maybe I'm going down the wrong road.. Am I doing
 something that Declarative doesn't like? :) Should I try Elixir
 instead?

I don't know whether this is currently possible with Declarative or
not. In the case it isn't, patching Declarative should be quite easy
(but I don't know if such a patch would be accepted or not). If you
don't want to go down that route, Elixir does support that pattern.

-- 
Gaëtan de Menten
http://openhex.org




[sqlalchemy] Re: Declarative and common fields

2008-09-29 Thread Joril

 I don't know whether this is currently possible with Declarative or
 not. In the case it isn't, patching Declarative should be quite easy
 (but I don't know if such a patch would be accepted or not).

I see.. I'll wait a bit then, maybe one of the developers will tell us
if it'd be acceptable :)

 If you don't want to go down that route, Elixir does support that pattern.

Do you mean these
http://elixir.ematia.de/trac/wiki/FAQ#HowdoIaddfunctionalitytoallmyentitiestothebaseclass
http://elixir.ematia.de/trac/wiki/FAQ#HowdoIprovideadifferentbaseclassthanEntity
?

The comment "Note that in this case you'll lose all default methods
provided by the Entity class" makes me think that extending Entity
isn't supported even in Elixir; have I got it wrong?
Thanks again!




[sqlalchemy] Re: Declarative and common fields

2008-09-29 Thread Michael Bayer


On Sep 29, 2008, at 11:01 AM, Joril wrote:


 Hi everyone!
 I'm new to SQLAlchemy and I'm using version 0.5rc1..
 I need every entity class to have a few common fields, so I tried
 writing an abstract base class, declarative-style, that every other
 entity class would subclass. So for example:

 ---
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy import Column, String, DateTime, Integer

 ORMBase = declarative_base()

 class BaseObject(ORMBase):
id = Column(Integer, primary_key=True)
creation_time = Column(DateTime)
modify_time = Column(DateTime)

 class TestEntity(BaseObject):
value = Column(String)
 ---

 But SQLAlchemy complains that
 sqlalchemy.exc.ArgumentError: Mapper 'Mapper|BaseObject|None' does not
 have a mapped_table specified.  (Are you using the return value of
 table.create()?  It no longer has a return value.)

 Is there a way to tell SQLAlchemy to treat BaseObject like a non-
 mapped-class? I tried using ORMBase as mixin to TestEntity (so
 BaseObject extends object and TestEntity extends BaseObject and
 ORMBase), but now I get a

 sqlalchemy.exc.ArgumentError: Mapper Mapper|TestEntity|tests could not
 assemble any primary key columns for mapped table 'tests'

 so I guess that maybe I'm going down the wrong road.. Am I doing
 something that Declarative doesn't like? :) Should I try Elixir
 instead?

 Many thanks for your time!

the Column objects that are present on each declarative class are  
unique to that class, so the creation of those three Column objects  
would have to occur for each class.  This suggests that the correct  
approach would be to extend the declarative metaclass to provide this  
behavior:

from sqlalchemy import *
from sqlalchemy.ext.declarative import DeclarativeMeta, declarative_base

class MyDefaults(DeclarativeMeta):
 def __init__(cls, classname, bases, dict_):
 dict_['id'] = Column(Integer, primary_key=True)
 dict_['creation_time'] = Column(DateTime)
 dict_['modify_time'] = Column(DateTime)
 return DeclarativeMeta.__init__(cls, classname, bases, dict_)

Base = declarative_base(metaclass=MyDefaults)

class TestEntity(Base):
 __tablename__ = 'test'
 value = Column(String)

print TestEntity.id == 5
print TestEntity.creation_time
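The point that Column objects are unique to each class can be illustrated with a plain-Python stand-in (FakeColumn is hypothetical, not a SQLAlchemy class): an attribute defined once on a base class is shared by every subclass, whereas a metaclass like the one above creates a fresh object per class:

```python
class FakeColumn:
    """Stand-in for sqlalchemy.Column: it gets bound to exactly one
    table/class, so sharing one instance across classes is a bug."""
    def __init__(self):
        self.bound_to = None

# Naive inheritance: ONE FakeColumn instance shared by every subclass.
class NaiveBase:
    id = FakeColumn()

class A(NaiveBase): pass
class B(NaiveBase): pass
# A.id is B.id -- the very same object

# Metaclass approach: a FRESH FakeColumn per class.
class PerClassColumns(type):
    def __init__(cls, name, bases, dict_):
        cls.id = FakeColumn()
        super().__init__(name, bases, dict_)

class C(metaclass=PerClassColumns): pass
class D(metaclass=PerClassColumns): pass
# C.id is not D.id -- each mapped class owns its own column
```
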






[sqlalchemy] Re: realize a join between two tables over a mediate table - SQLAlchemy-0.4.7

2008-09-29 Thread Michael Bayer


On Sep 29, 2008, at 1:38 PM, [EMAIL PROTECTED] wrote:


 Hello,

 I'm using 'SQLAlchemy-0.4.7' and I want to realize a join between two
 tables over a mediate table. So I have written following code:

 ---
 class MSanctionEntity(Mapper):

class_name = 'hapo.common.entity.SanctionEntity'
table_name = 'ent_entity'

  # table 'ent_entity' = parent table
  # table 'cou2ent' = mediate table
  # 'hapo.common.countries.Country' is mapped on table
  # 'cou_country' (child)


 @mapper_property
 ('hapo.common.countries.Country',secondary_table='cou2ent')
def citizens(self, mapper, cou_mapper, metadata, username,
 cou2ent):
return sqlalchemy.orm.relation(
cou_mapper,
uselist=True,
secondary= cou2ent
)
 ---

 But I get this error message:

  File /home/kikr/buildout-eggs/SQLAlchemy-0.4.7p1-py2.4.egg/
 sqlalchemy/orm/properties.py, line 542, in __determine_joins
raise exceptions.ArgumentError(Could not determine join condition
 between parent/child tables on relation %s.  
 ArgumentError: Could not determine join condition between parent/child
 tables on relation SanctionEntity.citizens (Country).  Specify a
 'primaryjoin' expression.  If this is a many-to-many relation,
 'secondaryjoin' is needed as well.

 ---

 debug information:

 (Pdb) self.parent.local_table
 Table('ent_entity', MetaData(XXX)), Column(u'ent_origin_id',
 PGBigInteger(), table=ent_entity, nullable=False), Column('lis_key',
 Integer(),
 ForeignKey('_portalUserGroup.asl_allowed_sanction_list_view.lis_id'),
 table=ent_entity), Column(u'ent_id', PGBigInteger(),
 table=ent_entity, primary_key=True, nullable=False),
 Column(u'reg_key', PGBigInteger(),
 ForeignKey(u'productiv.reg_regulation.reg_id'), table=ent_entity,
 nullable=False), schema='productiv')


 (Pdb) self.secondary
 Table('cou2ent', MetaData(TLEngine(XXX)), Column(u'ent_key',
 PGBigInteger(), ForeignKey(u'productiv.ent_entity.ent_id'),
 table=cou2ent, nullable=False), Column(u'cou_key', PGChar(length=2,
 convert_unicode=False, assert_unicode=None),
 ForeignKey(u'cou_country.cou_iso2'), table=cou2ent, nullable=False),
 Column(u'origin_id', PGBigInteger(), table=cou2ent, nullable=False),
 Column(u'reg_key', PGBigInteger(),
 ForeignKey(u'productiv.reg_regulation.reg_id'), table=cou2ent,
 nullable=False), schema='productiv')


 (Pdb) self.mapper.local_table
 Table('cou_country', MetaData(TLEngine(XXX)), Column(u'cou_iso2',
 PGChar(length=2, convert_unicode=False, assert_unicode=None),
 table=cou_country, primary_key=True, nullable=False),
 Column(u'cou_name', PGString(length=80, convert_unicode=False,
 assert_unicode=None), table=cou_country, nullable=False),
 Column(u'cou_printable_name', PGString(length=80,
 convert_unicode=False, assert_unicode=None), table=cou_country,
 nullable=False), Column(u'cou_iso3', PGChar(length=3,
 convert_unicode=False, assert_unicode=None), table=cou_country),
 Column(u'cou_numcode', PGSmallInteger(), table=cou_country),
 Column(u'cou_cctld', PGChar(length=2, convert_unicode=False,
 assert_unicode=None), table=cou_country), schema='public')

 ---

 I don't see the bug in the debug information. Please help me.

my guess would be confusion over schema names.  cou_country is  
explicitly in the "public" schema based on the schema argument, so a  
foreign key referencing it would need to indicate "public" as well.

See what happens if, using the raw Table objects, you say  
ent_entity.join(cou2ent), as well as cou2ent.join(cou_country).
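For reference, a minimal sketch of that experiment (written against the modern Core API, with the column lists trimmed down; table and schema names follow the post): once the foreign key carries the schema prefix, both joins can be inferred without a primaryjoin:

```python
from sqlalchemy import (Table, Column, BigInteger, CHAR,
                        MetaData, ForeignKey)

md = MetaData()

ent_entity = Table('ent_entity', md,
    Column('ent_id', BigInteger, primary_key=True),
    schema='productiv')

# cou_country lives in the 'public' schema...
cou_country = Table('cou_country', md,
    Column('cou_iso2', CHAR(2), primary_key=True),
    schema='public')

# ...so the association table's FK must say 'public.cou_country.cou_iso2';
# a bare 'cou_country.cou_iso2' would not resolve, reproducing the error.
cou2ent = Table('cou2ent', md,
    Column('ent_key', BigInteger,
           ForeignKey('productiv.ent_entity.ent_id'), nullable=False),
    Column('cou_key', CHAR(2),
           ForeignKey('public.cou_country.cou_iso2'), nullable=False),
    schema='productiv')

# both joins can now be inferred automatically
j1 = ent_entity.join(cou2ent)
j2 = cou2ent.join(cou_country)
```
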





[sqlalchemy] realize a join between two tables over a mediate table - SQLAlchemy-0.4.7

2008-09-29 Thread kai . krackler

Hello,

I'm using 'SQLAlchemy-0.4.7' and I want to realize a join between two
tables over a mediate table. So I have written following code:

---
class MSanctionEntity(Mapper):

class_name = 'hapo.common.entity.SanctionEntity'
table_name = 'ent_entity'

  # table 'ent_entity' = parent table
  # table 'cou2ent' = mediate table
  # 'hapo.common.countries.Country' is mapped on table
  # 'cou_country' (child)

 
@mapper_property('hapo.common.countries.Country',secondary_table='cou2ent')
def citizens(self, mapper, cou_mapper, metadata, username,
cou2ent):
return sqlalchemy.orm.relation(
cou_mapper,
uselist=True,
secondary= cou2ent
)
---

But I get this error message:

  File /home/kikr/buildout-eggs/SQLAlchemy-0.4.7p1-py2.4.egg/
sqlalchemy/orm/properties.py, line 542, in __determine_joins
raise exceptions.ArgumentError(Could not determine join condition
between parent/child tables on relation %s.  
ArgumentError: Could not determine join condition between parent/child
tables on relation SanctionEntity.citizens (Country).  Specify a
'primaryjoin' expression.  If this is a many-to-many relation,
'secondaryjoin' is needed as well.

---

debug information:

(Pdb) self.parent.local_table
Table('ent_entity', MetaData(XXX)), Column(u'ent_origin_id',
PGBigInteger(), table=ent_entity, nullable=False), Column('lis_key',
Integer(),
ForeignKey('_portalUserGroup.asl_allowed_sanction_list_view.lis_id'),
table=ent_entity), Column(u'ent_id', PGBigInteger(),
table=ent_entity, primary_key=True, nullable=False),
Column(u'reg_key', PGBigInteger(),
ForeignKey(u'productiv.reg_regulation.reg_id'), table=ent_entity,
nullable=False), schema='productiv')


(Pdb) self.secondary
Table('cou2ent', MetaData(TLEngine(XXX)), Column(u'ent_key',
PGBigInteger(), ForeignKey(u'productiv.ent_entity.ent_id'),
table=cou2ent, nullable=False), Column(u'cou_key', PGChar(length=2,
convert_unicode=False, assert_unicode=None),
ForeignKey(u'cou_country.cou_iso2'), table=cou2ent, nullable=False),
Column(u'origin_id', PGBigInteger(), table=cou2ent, nullable=False),
Column(u'reg_key', PGBigInteger(),
ForeignKey(u'productiv.reg_regulation.reg_id'), table=cou2ent,
nullable=False), schema='productiv')


(Pdb) self.mapper.local_table
Table('cou_country', MetaData(TLEngine(XXX)), Column(u'cou_iso2',
PGChar(length=2, convert_unicode=False, assert_unicode=None),
table=cou_country, primary_key=True, nullable=False),
Column(u'cou_name', PGString(length=80, convert_unicode=False,
assert_unicode=None), table=cou_country, nullable=False),
Column(u'cou_printable_name', PGString(length=80,
convert_unicode=False, assert_unicode=None), table=cou_country,
nullable=False), Column(u'cou_iso3', PGChar(length=3,
convert_unicode=False, assert_unicode=None), table=cou_country),
Column(u'cou_numcode', PGSmallInteger(), table=cou_country),
Column(u'cou_cctld', PGChar(length=2, convert_unicode=False,
assert_unicode=None), table=cou_country), schema='public')

---

I don't see the bug in the debug information. Please help me.

Thanks,
Kai.




[sqlalchemy] Re: how to join tables properly

2008-09-29 Thread Michael Bayer


On Sep 28, 2008, at 6:38 PM, Ksenia wrote:


 Hi list,

 I can't figure out a very simple thing. All I want is to do this
 query:


 select table1.A, table2.B, table3.C from table1, table2, table3 where
 table1.foreignkey2 = table2.primarykey and table1.foreignkey3 =
 table3.primarykey;

 When rendering results, I want to be able to refer to the properties
 of the joined tables:


 <td>${row.A}</td>
 <td>${row.table2.B}</td>
 <td>${row.table3.C}</td>

 My mapper looks like:

 mapper(Table1, table1,
properties={
'table2':relation(Table2, lazy=False,
 uselist=False),
'table3':relation(Table3, lazy=False,
 uselist=False)
}
)


 When I run it, it creates a query with tables joined via LEFT OUTER
 JOIN.

 How can I non-lazy (with minimal number of queries) join tables,
 without LEFT OUTER JOIN? But still be able to refer to the properties
 of joined tables?


remove the lazy=False from each relation(); leave eager loading as an  
option which you'd enable for an individual Query if desired.

For your SQL to be generated, say:

session.query(Table1.A, Table2.B, Table3.C).\
    filter(Table1.foreignkey2 == Table2.primarykey).\
    filter(Table1.foreignkey3 == Table3.primarykey)

To create the equivalent using JOIN while taking advantage of the  
relations:

session.query(Table1.A, Table2.B, Table3.C).\
    join(Table1.table2).join(Table1.table3)
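As a runnable illustration of the join-without-eager-loading advice (written against the modern declarative API with relationship() rather than the 0.5-era relation(); Parent/Child are hypothetical stand-ins for Table1/Table2):

```python
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
from sqlalchemy.orm import declarative_base, relationship, Session

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    a = Column(String)
    child_id = Column(Integer, ForeignKey('child.id'))
    # lazy loading is the default; no lazy=False here
    child = relationship('Child')

class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    b = Column(String)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([Child(id=1, b='B-value'),
                     Parent(id=1, a='A-value', child_id=1)])
    session.commit()

    # .join() along the relationship emits a plain inner JOIN,
    # not the LEFT OUTER JOIN that lazy=False would produce
    query = session.query(Parent.a, Child.b).join(Parent.child)
    sql = str(query)
    rows = query.all()
```
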






[sqlalchemy] mysql to sqlalchemy table definition in .py file?

2008-09-29 Thread Lukasz Szybalski

Hello,
I asked this about six months ago, and I was wondering if there has been
any progress.

I have existing mysql database, and I would like to connect to it via
sqlalchemy using autoload and generate a python file style schema out
of it.

I want to migrate my existing mysql database to a mydb.py file which I
can then use as my base for further development of the database.

The last time I checked there were programs to move from one database
to another, but they didn't support mysql.

Thanks,
Lucas
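[Editor's sketch] The reflection half of this is straightforward with autoload; a sketch against the modern API, using sqlite in place of mysql so it runs standalone (the mysql connection URL would be analogous, and emitting an actual .py file is left to dedicated schema-generation tools):

```python
from sqlalchemy import create_engine, MetaData, text

# sqlite stands in for mysql here; for mysql the URL would be
# something like 'mysql://user@host/dbname' (hypothetical)
engine = create_engine('sqlite://')
with engine.begin() as conn:
    conn.execute(text(
        "CREATE TABLE users (id INTEGER PRIMARY KEY, name VARCHAR(50))"))

# reflect every table from the live database
metadata = MetaData()
metadata.reflect(bind=engine)

users = metadata.tables['users']
# from here one could emit Python source for each Table by walking
# users.columns -- roughly what schema-generation tools do
column_names = [c.name for c in users.columns]
```
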

-- 
Python and OpenOffice documents and templates
http://lucasmanual.com/mywiki/OpenOffice
Fast and Easy Backup solution with Bacula
http://lucasmanual.com/mywiki/Bacula




[sqlalchemy] SQLAlchemy usage in TurboGears

2008-09-29 Thread Christoph Zwerschke

Hi Michael,

we're just discussing some issues with SQLAlchemy over at the TurboGears 
mailing list which appeared because the new SQLAlchemy usage as promoted 
in the 0.5 docs differs from the classical usage. We are unsure how to 
solve these issues, so I thought it will be best to get some input and 
suggestions from you.

In the TurboGears project, our standard application templates for 
SQLAlchemy create models in the classical (non-declarative) way.

To make this a bit more convenient, TurboGears creates a scoped session 
and maps the model classes with this session's mapper (i.e. with a 
contextual, session-aware mapper that is made available to the 
TurboGears application). This worked very well for us, but now with SA 
0.5 the declarative usage, which uses a normal (not contextual) mapper, 
is becoming more popular. The existence of contextual mappers is not even 
mentioned any more in the SA 0.5 docs. Is this an omission or is my 
impression right that you want to phase out this kind of usage?

Another point is that our default scoped_session(create_session) in 
TurboGears creates sessions with autoflush=False and autocommit=True, 
while default sessionmaker() sessions in SQLAlchemy now have the 
opposite, autoflush=True and autocommit=False. So maybe our usage is 
outdated in that regard, too?

What kind of usage do you suggest to provide as default in our 
TurboGears app templates that will be as convenient for the average user 
and as future-safe as possible?

-- Christoph




[sqlalchemy] mysql, use password from my.cnf file

2008-09-29 Thread Tom H

Depending on security preferences, it may be desirable to keep
database passwords out of code files.

One of the recommendations for MySQL is to keep the password in
a .my.cnf file accessible only to the user.
http://dev.mysql.com/doc/refman/5.0/en/password-security.html

To use the .my.cnf password in sqlalchemy, do something like the
following:

from sqlalchemy import create_engine
from sqlalchemy.engine.url import URL

myDB = URL(drivername='mysql', host='localhost',
           database='my_database_name',
           query={'read_default_file': '/path/to/.my.cnf'})
engine = create_engine(name_or_url=myDB)
# use the engine as usual, no password needed in your code file :)

This has been very useful to me, hopefully others find it helpful as
well.
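[Editor's sketch] A stdlib-only sketch of the same idea, for cases where one wants to read the credentials directly (the file contents and paths here are made up; note that real my.cnf files can contain options that configparser's strict INI parsing may reject):

```python
import configparser
import tempfile

# a throwaway stand-in for ~/.my.cnf (hypothetical contents)
cnf_text = """\
[client]
user = appuser
password = s3cret
"""
with tempfile.NamedTemporaryFile('w', suffix='.my.cnf',
                                 delete=False) as f:
    f.write(cnf_text)
    cnf_path = f.name

# read the [client] section the way mysql clients do, so the
# password never appears in the code file itself
parser = configparser.ConfigParser()
parser.read(cnf_path)
user = parser['client']['user']
password = parser['client']['password']

# the credentials can then feed URL(...) or a plain DSN string
dsn = f"mysql://{user}:{password}@localhost/my_database_name"
```
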



[sqlalchemy] Re: SQLAlchemy usage in TurboGears

2008-09-29 Thread Michael Bayer


On Sep 29, 2008, at 5:22 PM, Christoph Zwerschke wrote:


 Hi Michael,

 we're just discussing some issues with SQLAlchemy over at the  
 TurboGears
 mailing list which appeared because the new SQLAlchemy usage as  
 promoted
 in the 0.5 docs differs from the classical usage. We are unsure how to
 solve these issues, so I thought it will be best to get some input and
 suggestions from you.

 In the TurboGears project, our standard application templates for
 SQLAlchemy create models in the classical (non-declarative) way.

 To make this a bit more convenient, TurboGears creates a scoped  
 session
 and maps the model classes with this session's mapper (i.e. with a
 contextual, session-aware mapper that is made available to the
 TurboGears application). This worked very well for us, but now with SA
 0.5, the declarative usage becomes more popular, which uses a normal
 (not contextual) mapper. The existence of contextual mappers is not  
 even
 mentioned any more in the SA 0.5 docs. Is this an omission or is my
 impression right that you want to phase out this kind of usage?

 Another point is that our default scoped_session(create_session) in
 TurboGears creates sessions with autoflush=False and autocommit=True,
 while defaults sessionmaker() sessions in SQLAlchemy now have the
 opposite, autoflush=True and autocommit=False. So maybe our usage is
 outdated in that regard, too?

 What kind of usage do you suggest to provide as default in our
 TurboGears app templates that will be as convenient for the average  
 user
 and as future-safe as possible?

Your current approach of using classical mappings with Session.mapper  
is entirely supported.   sessionmaker() and create_session retain the  
same defaults as they did in 0.4 as well.   The "transactional" flag  
has been renamed to "autocommit" with a flip in its boolean status,  
but the "transactional" flag works throughout 0.5 with a deprecation  
warning.

So as far as future safe, you're entirely fine.

Now as far as where its going, we are pushing the declarative syntax,  
particularly for people in the up and running mindset, because it  
suits a large number of applications and removes the common SQLA  
criticism that our configuration is too verbose.   Declarative is in  
fact just a very thin configurational layer on top of the standard  
mapper() and Table constructs, so it works with all current mapper  
configurations, including Session.mapper - just pass Session.mapper as  
the mapper keyword argument in declarative_base().  The declarative  
system is sparse and open ended, works in conjunction with explicit  
Table objects as well as explict mapper() setups, can be customized at  
the metaclass or base class level, and can be used *without* a  
metaclass too which is how Grok is doing it.   So if your users seem  
to prefer that syntax, you probably want to push TG towards that  
approach.

As far as why Session.mapper itself isn't talked about much, the  
existence of that feature is to me, a historical one dating back to  
the 0.1 series.   It provides three pieces of functionality - a  
default __init__() method which populates an instance from keyword  
arguments, a query attribute which is shorthand for  
Session.query(cls), and it automatically saves the instance to the  
Session upon init (now called add()).   The most controversial  
function is the auto-save.  Back when the feature was more popular, we  
regularly would get users that were confused when they tried to work  
with explicit sessions and add objects to them, since they were  
magically saved on construction.  When the autoflush option was  
introduced, we had users who couldn't use it because it would flush()  
the object before __init__ had completed, for __init__ methods that  
used the Query to locate dependent data.  It saves before __init__  
proceeds, and we couldn't even change that because it would break  
other people's code.

Because SQLA really doesn't want to be in the framework business,  
we've pulled way back on Session.mapper.  The __init__() method it  
provides is trivial to implement in a base class, and is also provided  
when using the declarative extension.  The query attribute is also  
trivial to implement, but we've added a helper onto Session called  
query_property() which provides this.  If an application wishes to  
have a certain set of behaviors for all of its mapped classes, we  
really want developers to create their own base classes which provide  
whatever functionality they want.  This idea has been prevalent since  
we introduced 0.4.

So without reading the thread, I think declarative would probably be  
good, you can use it with Session.mapper if you want, though I view  
Session.mapper as somewhat legacy.
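A minimal sketch of the "build your own base class" advice above (class names are hypothetical; Session.query_property() and scoped_session are the real APIs referenced in the post):

```python
from sqlalchemy.orm import scoped_session, sessionmaker

DBSession = scoped_session(sessionmaker())

class ModelBase:
    """Roll-your-own replacement for Session.mapper's conveniences."""
    # shorthand for DBSession.query(cls) on every subclass
    query = DBSession.query_property()

    def __init__(self, **kwargs):
        # populate attributes from keyword arguments; note there is
        # deliberately NO auto-add to the session here
        for key, value in kwargs.items():
            setattr(self, key, value)

class User(ModelBase):
    pass

u = User(name='alice', email='alice@example.com')
```
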



[sqlalchemy] Re: BINARY Columns in MS SQL

2008-09-29 Thread John Hampton

Rick Morrison wrote:
   The first step here is to get a full binary round trip working with
   only pyodbc, no SQLA in use.
 
 Well, that's how I got as far as I did, but that was with straight text, 
 no bind params

I did a little more digging into this and found out that pyodbc with 
straight hex numbers works fine, but when using a buffer object it borks.


 
   so this is likely pyodbc assigning quoting.
 
 OK then, dead end #2. I'll return to this when I've got a few more minutes.


It also looks as though this is only a pyodbc issue on linux as all of 
the pyodbc tests pass on windows.  I'm suspecting some weird behavior 
between pyodbc, unixODBC and FreeTDS.

I have opened a ticket on the pyodbc [1] project page and the dev is 
going to take a look at it.

Hopefully we'll have an answer soon.

-John

[1] 
http://sourceforge.net/tracker/index.php?func=detail&aid=2136938&group_id=162557&atid=824254
