[sqlalchemy] Re: MSSQL default_schema

2007-08-14 Thread Christophe de VIENNE

2007/8/13, Rick Morrison [EMAIL PROTECTED]:
 That SQL log is from the table existence check.

 Although it's unclear from the trace and log as to
 whether the check is for the table create
 or for the table drop, it is correctly using
 the default schema, which is 'dbo' on all
 MSSQL platforms.

It's for the delete (which then does not happen because the table is not found)

 So, the table check and the drop are working correctly. It's the table
 create that is incorrectly creating the table in the old owner schema,
 rather than using the default schema.

 I believe that's a pyodbc-ism, as I routinely use implicit (default) schema
 on pymssql, and all tables are created in the default schema correctly.
 Paul's workaround for this will work fine for now.

 Christophe, Are you using pyodbc, or some other DBAPI?

pyodbc on a linux platform

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] [OT] Job offer

2007-08-14 Thread Grzegorz Adam Hankiewicz

Hello.

I submitted a job offer to the Python jobs page at 
http://www.python.org/community/jobs/ but it looks like it will take some 
time to go up. Since the job is related to SA, I'm also posting the 
announcement here. If you are interested, or know somebody who knows somebody 
else who... contact me.

Rastertech España S.A. http://www.rastertech.es (Colmenar Viejo, Madrid, 
Spain)
==

Job Description: We are looking for a database programmer with experience
implementing Python software for the improvement of an existing database
application. Required skills:

* English both written and spoken.
* Knowledge of ORM systems, preferably SQLAlchemy.
* Previous experience implementing applications with database concurrency in
  mind.

We evaluate positively:

* Spanish both written and spoken.
* Experience dealing with clients.
* Knowledge of other programming languages (like C or Haskell).
* Willingness to travel to international clients.
* GUI programming with Dabo.
* Taking responsibility for projects as end products, with the goal of
  perfection in mind.

You will be offered a permanent position in a small development team in the
process of migrating an existing application to a new concurrent database model
where both Windows applications and web users will interact with the same
data.

* Contact: Grzegorz Adam Hankiewicz
* E-mail contact: [EMAIL PROTECTED]
* Web: http://www.rastertech.es/recruit.en.html


-- 
Rastertech España S.A.
Grzegorz Adam Hankiewicz
/Jefe de Producto TeraVial/

C/ Perfumería 21. Nave I. Polígono industrial La Mina
28770 Colmenar Viejo. Madrid (España)
Tel. +34 918 467 390 (Ext.17) *·*   Fax +34 918 457 889
[EMAIL PROTECTED] *·*   www.rastertech.es 
http://www.rastertech.es/





[sqlalchemy] Re: Cannot connect with cx_Oracle - argument type error

2007-08-14 Thread SOS

Found it! Apparently someone had written the connect string
incorrectly (a slash instead of a colon) on the pre-production
machine...


-S

On Aug 14, 12:21 pm, SOS [EMAIL PROTECTED] wrote:
 I am getting some kind of error from (apparently) cx_Oracle when
 attempting to connect. I am using cx_Oracle 4.2 on my dev box, on
 which this error does NOT happen. On the pre-production box, we had
 cx_Oracle 4.3 which gave the error and the admin then installed 4.2,
 which still gave the error (incomplete uninstall was my guess but that
 did not lead anywhere). Both use SQLAlchemy beta1. I used to get this
 on my dev machine but upgrading to cx_Oracle 4.2 fixed it (I had some
 very old version before).

 Here is the error and trace:

  /usr/lib/python2.4/site-packages/baasandmed/DataModel.py in
 initialiseDataModel(database=sqlalchemy.orm.session.Session object,
 databaseEngine=Engine(oracle://register/[EMAIL PROTECTED]))
   544 meta = MetaData(databaseEngine)
   545
   546 BAASANDMED_VIEW = Table('baasandmed_view', meta,
 autoload = True)
   547 BAASANDMED_FIELD = Table('baasandmed_field', meta,
 autoload = True)
   548 BAASANDMED_MINIREF_VALUES =
 Table('baasandmed_miniref_values', meta, autoload = True)
 BAASANDMED_VIEW undefined, global Table = class
 'sqlalchemy.schema.Table', meta = MetaData(Engine(oracle://register/
 [EMAIL PROTECTED])), autoload undefined, builtin True = True
  /usr/lib/python2.4/site-packages/sqlalchemy/schema.py in
 __call__(self=class 'sqlalchemy.schema.Table',
 name='baasandmed_view', metadata=MetaData(Engine(oracle://register/
 [EMAIL PROTECTED])), *args=(), **kwargs={})
   115 autoload_with.reflecttable(table,
 include_columns=include_columns)
   116 else:
   117
 metadata._get_bind(raiseerr=True).reflecttable(table,
 include_columns=include_columns)
   118 except exceptions.NoSuchTableError:
   119 del metadata.tables[key]
 metadata = MetaData(Engine(oracle://register/[EMAIL PROTECTED])),
 metadata._get_bind = bound method MetaData._get_bind of
 MetaData(Engine(oracle://register/[EMAIL PROTECTED])), raiseerr
 undefined, builtin True = True, ).reflecttable undefined, table =
 Table('baasandmed_view', MetaData(Engine(oracle://register/
 [EMAIL PROTECTED])), schema=None), include_columns = None
  /usr/lib/python2.4/site-packages/sqlalchemy/engine/base.py in
 reflecttable(self=Engine(oracle://register/[EMAIL PROTECTED]),
 table=Table('baasandmed_view', MetaData(Engine(oracle://register/
 [EMAIL PROTECTED])), schema=None), connection=None,
 include_columns=None)
  1085
  1086 if connection is None:
  1087 conn = self.contextual_connect()
  1088 else:
  1089 conn = connection
 conn undefined, self = Engine(oracle://register/[EMAIL PROTECTED]),
 self.contextual_connect = bound method Engine.contextual_connect of
 Engine(oracle://register/[EMAIL PROTECTED])
  /usr/lib/python2.4/site-packages/sqlalchemy/engine/base.py in
 contextual_connect(self=Engine(oracle://register/[EMAIL PROTECTED]),
 close_with_result=False, **kwargs={})
  1053 
  1054
  1055 return Connection(self,
 close_with_result=close_with_result, **kwargs)
  1056
  1057 def table_names(self, schema=None, connection=None):
 global Connection = class 'sqlalchemy.engine.base.Connection', self
 = Engine(oracle://register/[EMAIL PROTECTED]), close_with_result =
 False, kwargs = {}
  /usr/lib/python2.4/site-packages/sqlalchemy/engine/base.py in
 __init__(self=sqlalchemy.engine.base.Connection object,
 engine=Engine(oracle://register/[EMAIL PROTECTED]), connection=None,
 close_with_result=False, _branch=False)
   537 def __init__(self, engine, connection=None,
 close_with_result=False, _branch=False):
   538 self.__engine = engine
   539 self.__connection = connection or
 engine.raw_connection()
   540 self.__transaction = None
   541 self.__close_with_result = close_with_result
 self = sqlalchemy.engine.base.Connection object, self.__connection
 undefined, connection = None, engine = Engine(oracle://register/
 [EMAIL PROTECTED]), engine.raw_connection = bound method
 Engine.raw_connection of Engine(oracle://register/
 [EMAIL PROTECTED])
  /usr/lib/python2.4/site-packages/sqlalchemy/engine/base.py in
 raw_connection(self=Engine(oracle://register/[EMAIL PROTECTED]))
  1100 Return a DBAPI connection.
  1101
  1102 return self.pool.connect()
  1103
  1104 def log(self, msg):
 self = Engine(oracle://register/[EMAIL PROTECTED]), self.pool =
 sqlalchemy.pool.QueuePool object, self.pool.connect = bound method
 QueuePool.connect of sqlalchemy.pool.QueuePool object
  /usr/lib/python2.4/site-packages/sqlalchemy/pool.py in
 connect(self=sqlalchemy.pool.QueuePool object)
   157 def connect(self):
   158 if not self._use_threadlocal:
   159 return _ConnectionFairy(self).checkout()
   160
   161  

[sqlalchemy] Re: MSSQL default_schema

2007-08-14 Thread Rick Morrison
 It's for the delete (which then does not happen because the table is not
found)

Sure, but the drop is being issued in the correct default schema (dbo). The
error is not that the drop is being issued in the wrong schema, it is that
the table was *created* in the wrong schema, and so is not where it ought to
be.

 pyodbc on a linux platform

That is a problematic combination right now; currently pymssql is a better
bet on Linux unless you're working with Unicode columns. We're working on
better support for pyodbc + *nix, but we first need to get through the
0.4 beta cycle, as MSSQL is currently broken on the 0.4 trunk. A stable
short-term platform for you would be the 0.3.10 release or the 0.3 branch
tip + pymssql.

Rick


 





[sqlalchemy] Re: MSSQL default_schema

2007-08-14 Thread Christophe de VIENNE

2007/8/14, Rick Morrison [EMAIL PROTECTED]:
  It's for the delete (which then does not happen because the table is not
 found)

 Sure, but the drop is being issued in the correct default schema (dbo).

No it's not. If I don't enable checkfirst the table is dropped, which
means both statements are issued on the wrong schema (considering that
the check is right).

 The
 error is not that the drop is being issued in the wrong schema, it is that
 the table was *created* in the wrong schema, and so is not where it ought to
 be.

  pyodbc on a linux platform

 That is a problematic combination right now, currently pymssql is a better
 bet on Linux unless you're working with unicode columns. We're working on
 better support for pyodbc + *nix, but we first need to get through the 0.4
 beta cycle, as MSSQL is currently broken on the 0.4 trunk. A stable
 short-term platform for you would be the 0.3.10 release or the 0.3 branch
 tip + pymssql.

I'm going back and forth between both implementations, and am willing
to help a bit on MSSQL support. From time to time I try to see why a
unit test doesn't pass, and if I can see why, I propose a patch.


Regards,

Christophe




[sqlalchemy] Re: MSSQL default_schema

2007-08-14 Thread Rick Morrison
 Sure, but the drop is being issued in the correct default schema (dbo).

No it's not. If I don't enable checkfirst the table is dropped, which
means both statements are issued on the wrong schema (considering that
the check is right).

Ah OK I didn't get that from the previous
messages. Then it sounds like the check is looking at the default
schema, but the issued statements aren't using it.

Looking at the code, I can see the explicit use of the default schema for
table existence checks and table reflects, but no special treatment for
normal SQL ops like SELECT/UPDATE/DELETE. The code assumes that these
details will be handled by the default ansisql.py Dialect, which would then
issue its own checks for the default schema via callbacks. But at least in
the 0.3 branch, those calls are never made; the SQL generator just looks at
table.schema, which is presumably an explicit schema set in the metadata.

Mike, are you watching this thread? Should ansisql recognize and use default
schemas, or should the DB dialect somehow override the construction of the
table name?

I'm going back and forth between both implementations, and am willing
to help a bit on mssql support. From time to time I try to see why a
unittest wouldn't pass and if I can see why I propose a patch.

That is a very much appreciated effort; making unit tests pass for MSSQL is
a big job.

Rick




[sqlalchemy] SQLAlchemy 0.4 beta2 released

2007-08-14 Thread Michael Bayer

I just put beta2 up.  The important things in this release are major
MS-SQL and Oracle fixes, after we got some on-the-ground testing done.

There's also a new function we're planning to use in Pylons called
engine_from_config(), which reads a configuration dictionary (such
as from a .ini file) and returns an Engine instance.
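For readers curious what a config-dict-to-engine helper involves, here is a hypothetical sketch of the kind of key filtering such a function might do internally (this is not SQLAlchemy's actual implementation; the helper name and behavior are illustrative assumptions):

```python
# Hypothetical sketch: pull the "sqlalchemy."-prefixed keys out of a
# flat configuration dict (e.g. parsed from a .ini file), strip the
# prefix, and return the URL plus remaining keyword options that an
# engine factory could consume.
def engine_args_from_config(config, prefix='sqlalchemy.'):
    opts = {}
    for key, value in config.items():
        if key.startswith(prefix):
            opts[key[len(prefix):]] = value
    url = opts.pop('url')  # e.g. the sqlalchemy.url entry of a .ini file
    return url, opts

url, kwargs = engine_args_from_config({
    'sqlalchemy.url': 'sqlite://',
    'sqlalchemy.echo': 'true',
    'debug': 'false',          # non-prefixed keys are ignored
})
```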

Also some enhancements to Session.execute() which may become important
as we move more towards the session-as-executor model; if you are
using multiple engines, execute() will search through the
ClauseElement for tables that it can link to bound engines, thus
reducing the need to pass a mapper along to it.

Look for some big performance enhancements in beta3.

SQLAlchemy 0.4 beta2 is available for download at:

http://www.sqlalchemy.org/download.html





[sqlalchemy] Re: SQLAlchemy 0.4 beta2 released

2007-08-14 Thread svilen


performance-wise - do you have any test/target for profiling? otherwise i can 
repeat some tests i did sometime in February (if i remember them..)

=

while looking to replace all {} with dict/Dict(), i found some things. 
Here's the list, quite random; probably some can be just ignored if not 
an actual issue - i may have misunderstood things; have a look.
(btw the overall candidates for the replacement are like 60-70 lines; 
all else are kwargs or lookup-tables.)

---
database/informix.py:
  ischema_names = { ... }  has duplicate keys/entries

---
database/*
  get_col_spec(self) etc:
these string-formats may be better without the artificial dict,
 eg. return self._extend("CHAR(%(length)s)" % {'length': self.length})
 - return self._extend("CHAR(%s)" % self.length)
   or
 - return self._extend("CHAR(%(length)s)" % self.__dict__)
no idea if get_col_spec() is used that much to have crucial impact on 
speed though, at least it looks simpler.

---
orm.util.AliasedClauses._create_row_adapter()
class AliasedRowAdapter( object):
 1. can't this be made as standalone class, returning an instance, 
initialized with the map, which is then __call__()ed ?
 2. this can be faster if: 
a) has_key = __contains__ #instead of yet another funccall
b) __getitem__ uses try except instead of double lookup key in map
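The two micro-optimizations suggested in point 2 can be sketched in plain Python; these classes are illustrative stand-ins, not SQLAlchemy's actual AliasedRowAdapter:

```python
# Suggestion 2b: __getitem__ via a double lookup (check, then fetch)
# versus a single try/except fetch that touches the dict once on the
# common (key present) path.
class DoubleLookupAdapter(object):
    def __init__(self, keymap):
        self.map = keymap
    def __getitem__(self, key):
        if key in self.map:        # lookup #1
            return self.map[key]   # lookup #2
        return key

class TryExceptAdapter(object):
    def __init__(self, keymap):
        self.map = keymap
    def __contains__(self, key):
        return key in self.map
    # suggestion 2a: alias the method instead of adding another call layer
    has_key = __contains__
    def __getitem__(self, key):
        try:
            return self.map[key]   # single lookup when the key is mapped
        except KeyError:
            return key
```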

---
orm.mapper
 Mapper._instance():
WTF is the **{'key':value, ... } ?
eg. if extension.populate_instance(self, context, row, instance, 
**{'instancekey':identitykey, 'isnew':isnew}) ...
same thing is done as separate variable a page later;
btw there are several of these **{} in the file

 also, Mapper._options is redundant (leftover?) neverused


---
orm.attribute
  AttributeManager.init_attr():
the saving this one eventually does is too small, compared to a 
property call of ._state.

  AttributeManager.register_attribute():
the def _get_state(self) that is made into a property _state can 
be made eventually faster with try-except instead of 'if'.

 btw: can't that ._state property be removed altogether (i.e. made a 
plain attribute)? then init_attr() MUST be there setting it up as a plain 
dict.

---
orm/unitofwork
UOWTask._sort_circular_dependencies():
def get_dependency_task(obj, depprocessor):
try:
dp = dependencies[obj]
except KeyError:
dp = dependencies.setdefault(obj, {})
isn't just the setdefault() enough?
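For reference, the setdefault() call alone does cover both branches, so the try/except in the quoted snippet is redundant. A quick illustration:

```python
# dict.setdefault returns the existing value when the key is already
# present, and inserts-then-returns the default when it is not, so no
# KeyError handling is needed around it.
dependencies = {}
first = dependencies.setdefault('obj', {})    # key absent: {} inserted
first['processor'] = 'dp1'
second = dependencies.setdefault('obj', {})   # key present: same dict back
```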

---
engine.url.
  def translate_connect_args(self, names):
this assumes the order of passed names matches the order of 
attribute_names inside... very fragile. Why not use a set of kwargs 
like (attr_name=replacement_name, defaulting to None), then just use 
the non-empty ones?
  def _parse_rfc1738_args():
the 'opts' dict is redundant, would be cleaner if args are just 
passed to URL( name=value)
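The kwargs-based translation suggested for translate_connect_args might look like the following sketch (illustrative only; the name and signature here do not match SQLAlchemy's actual API):

```python
# Each attribute names its replacement explicitly, so the order of
# arguments no longer matters and unmentioned attributes pass through
# untouched -- avoiding the fragile positional coupling noted above.
def translate_connect_args(url_attrs, **name_map):
    translated = dict(url_attrs)
    for attr, new_name in name_map.items():
        if new_name is not None and attr in translated:
            translated[new_name] = translated.pop(attr)
    return translated

opts = translate_connect_args(
    {'username': 'scott', 'password': 'tiger', 'host': 'db1'},
    username='user', password='passwd')
```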

---
topological.py
   QueueDependencySorter.sort():
  'cycles' is a redundant, never-used variable

---
util:
  ThreadLocal:
   - wouldn't it be faster if the key in the _tdict were a tuple(id, key) and 
not some formatted string of these? or is the key non-hashable?
   - the engine/threadlocal.TLEngine._session() issues a hasattr() on 
such an object. how does it actually work? IMO it always fails

==
hey, thanks for the MetaData.reflect()!

svil




[sqlalchemy] Re: MSSQL default_schema

2007-08-14 Thread Rick Morrison
 Should ansisql recognize and use default schemas,
 or should the DB dialect somehow override the construction of the table
name?

The more I think about this, the more I'm becoming convinced that specifying
an implicit default schema in all generated SQL is a pretty bad idea. The
reason is that it would break a feature of a few database engines that might
be called "schema cascade", or what Postgres explicitly calls the schema
search path.

The basics of the feature are that during execution of an SQL statement, the
current schema (that is, the schema associated with the current connection)
is searched for the objects specified in the query, and if the search for
those items fails, the search continues in the default schema, or along an
explicit schema path.

This allows for the construction of "local" or "override" tables/objects
that would be seen by a particular user or role, while others would see the
other, underlying table. For example, consider the following schema/table
layout.

schema 'public':
   table 'a'
   table 'b'
   table 'c'

schema 'archive':
   table 'a'


and the query:

select a.*, b.* from a, b where b.id_a = a.id


user x might see the following underlying query plan:
select a.*, b.*  from public.a as a, public.b as b where.

while user archive might instead see:
select a.*, b.* from archive.a as a, public.b as b where.

If SA were to specify the implicit default schema in all queries, this
behavior obviously breaks.

I think the SQL that SA currently generates is actually the 'correct' SQL in
Christophe's situation:

create table foo(...)

which pyodbc then changes into

create table user.foo(...)

where 'user' is the user name of the logged-in user. While an explicit
schema specification from SA would stop that, it breaks the schema
cascade behavior.

So I think the bug is really in pyodbc, for adding the explicit schema where
none was requested. Thoughts?




[sqlalchemy] Serialize ClauseLists

2007-08-14 Thread Koen Bok

For our app we'd like the user to construct and save search queries
that we execute on the database. My first thought was to construct
something like:

predicate = and_(or_(item.c.id > 5, item.c.id < 3),
item.c.name.like('aap'))

And save this to a pickle column in the database. But it gives:

raise TypeError, "can't pickle %s objects" % base.__name__
TypeError: can't pickle module objects

I could write wrappers around or_ and and_ and the binaryexpressions
etc but I have the feeling there must be a better approach. Anyone got
a hint?
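One common workaround, sketched here under the assumption that the search criteria can be described as plain data (this is not a SQLAlchemy feature, and all names below are made up): pickle a declarative description of the search instead of the expression objects, and rebuild the real clause at load time. For illustration, the compiler below emits a plain Python predicate over a row dict; in the real app it would emit and_/or_/like constructs instead:

```python
import pickle

# Persist the criteria as nested tuples, which pickle without trouble,
# and compile them back into an executable predicate on load.
def compile_spec(spec):
    op = spec[0]
    if op in ('and', 'or'):
        parts = [compile_spec(s) for s in spec[1:]]
        combine = all if op == 'and' else any
        return lambda row: combine(p(row) for p in parts)
    if op == 'gt':
        _, col, val = spec
        return lambda row: row[col] > val
    if op == 'lt':
        _, col, val = spec
        return lambda row: row[col] < val
    raise ValueError('unknown op: %r' % (op,))

spec = ('and', ('or', ('gt', 'id', 5), ('lt', 'id', 3)),
        ('gt', 'id', 0))
blob = pickle.dumps(spec)                  # round-trips cleanly
predicate = compile_spec(pickle.loads(blob))
```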





[sqlalchemy] Re: SQLAlchemy 0.4 beta2 released

2007-08-14 Thread Brian Beck

On Aug 14, 11:29 am, Michael Bayer [EMAIL PROTECTED] wrote:
 I just put beta2 up.  The important things in this release are major
 MS-SQL and Oracle fixes, after we got some on-the-ground testing done.

 Theres also a new function we're planning to use in Pylons called
 engine_from_config(), which reads a configuration dictionary (such
 as from a .ini file) and returns an Engine instance.

 Also some enhancements to Session.execute() which may become important
 as we move more towards the session-as-executor model; if you are
 using multiple engines, execute() will search through the
 ClauseElement for tables that it can link to bound engines, thus
 reducing the need to pass a mapper along to it.

0.4 looks awesome. Some names jump out at me though:
* sessionmaker - Strange that it's a noun, and not a verb (like
make_session)
* bind/binds arguments - Strange that it's a verb, and not a noun (why
not engine/engines?)

Keep up the good work!

--
Brian Beck / Adventurer of the First Order / www.brianbeck.com





[sqlalchemy] Oracle sequences issue

2007-08-14 Thread Pavel Skvazh

I'm trying to make sequences in the Oracle database.
Here's my model
def init():
#conf = paste.deploy.CONFIG
#uri = conf['sqlalchemy.dburi']
if 'login' in session:
uri=1
else:
conf = paste.deploy.CONFIG
uri = conf['sqlalchemy.dburi']  # Admin rights here
engine = create_engine(uri, convert_unicode=True)
#   meta.bind = engine
engine.echo = True
meta.connect(engine)

application_sequence = Sequence('Application_id_seq', optional=True)
application_table = Table ( 'Application', meta,
Column('ID_Application',  Integer, application_sequence,
primary_key=True),
Column('ID_Seller',   Integer, nullable=False),
Column('ID_Lot',  Integer, nullable=False),
Column('ID_Sell', Integer, nullable=False),
Column('Text_Application',String(2048)),
Column('Date_Start_Application',  DateTime,
default=func.current_timestamp()),
Column('ID_Status',   Integer),
Column('Date_Change_Application', DateTime),
Column('Date_Finish_Application', DateTime),
ForeignKeyConstraint(['ID_Sell', 'ID_Lot'], ['data.Lot.ID_Sell',
'data.Lot.ID_Lot'], ondelete="CASCADE"),
schema='data'
)

Websetup is done like this

def setup_config(command, filename, section, vars):
conf = paste.deploy.appconfig('config:' + filename)
conf.update(dict(app_conf=conf.local_conf,
global_conf=conf.global_conf))
paste.deploy.CONFIG.push_process_config(conf)

uri = conf['sqlalchemy.dburi_data']  # Here i log in as the 'data' schema
engine = create_engine(uri)
print "Connecting to database %s ..." % uri
model.meta.connect(engine)

#   Add some basic values into the table.

Everything works fine during websetup. It adds the values to the
table, so I assume it can see the Sequence.

But when working with the model, since it logs in as a different
schema, it can't see the Sequence.

After that I changed it to
application_doc_sequence = Sequence('data.APPLICATION_DOC_ID_SEQ',
optional=True)
and the model can see it just fine; works great. But when it comes to
websetup, it fails.

After clearing the database by hand, I decided to take a look at how
websetup would handle it from a blank database.

Now it works, but when it comes to websetup it gives me this:
"name is already used by an existing object"

As it turns out, it creates a sequence named
'data.APPLICATION_DOC_ID_SEQ', which is not exactly right, I guess.
My assumption would be that they treat this string differently: the model
can see the schema to look in, but websetup takes it literally as a
string, and on the other hand refuses to delete it during drop_all.

Probably that's a well-known issue and it's fixed in 0.4, or maybe it's
me (which is in fact far more likely since I'm new to Alchemy).
Anyway, I'd be delighted if you guys would point it out to me.





[sqlalchemy] [SA 0.4] Getting a mapper class from a mapper class

2007-08-14 Thread Andreas Jung

I've been using the following code for obtaining a mapper class
from some other mapper.

class_mapper(self.__class__).props[name].mapper.class_

'props' is no longer available however I found '_Mapper__props'.

Is this the right replacement or is there a smarter way?

Andreas



[sqlalchemy] Re: SQLAlchemy 0.4 beta2 released

2007-08-14 Thread sdobrev

databases/sqlite: (reflecttable)
  pragma_names is missing the BOOLEAN word/type - nulltype

btw why isn't each dialect-typeclass adding its own entry to that 
pragma_names, respectively to the colspecs ? 
Or, each class could have those pragma-word and basetype, and the dicts 
could be made by walking locals() if issubclass(..) ?

Anyway, these dicts (the grammar) should be automatically built from 
the available typeclasses...
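The idea of auto-building the grammar from the type classes can be sketched in plain Python; the class and attribute names below are made up for illustration and do not match SQLAlchemy's actual dialect internals:

```python
# Instead of a hand-maintained lookup table, let each type class declare
# its own keyword and collect the grammar by walking a namespace for
# subclasses -- so a new type (e.g. BOOLEAN) can't be forgotten.
class TypeEngine(object):
    pragma_word = None

class INTEGER(TypeEngine):
    pragma_word = 'INTEGER'

class BOOLEAN(TypeEngine):
    pragma_word = 'BOOLEAN'

def build_pragma_names(namespace):
    grammar = {}
    for obj in namespace.values():
        if (isinstance(obj, type) and issubclass(obj, TypeEngine)
                and obj.pragma_word is not None):
            grammar[obj.pragma_word] = obj
    return grammar

pragma_names = build_pragma_names(globals())
```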




[sqlalchemy] Re: [SA 0.4] Getting a mapper class from a mapper class

2007-08-14 Thread Michael Bayer

there's a method for now, get_property().  it has some extra  
features, such as resolve_synonyms.


On Aug 14, 2007, at 3:15 PM, Andreas Jung wrote:

 I've been using the following code for obtaining a mapper class
 from some other mapper.

 class_mapper(self.__class__).props[name].mapper.class_

 'props' is no longer available however I found '_Mapper__props'.

 Is this the right replacement or is there a smarter way?

 Andreas





[sqlalchemy] Re: Serialize ClauseLists

2007-08-14 Thread Michael Bayer


On Aug 14, 2007, at 1:37 PM, Koen Bok wrote:


 For our app we'd like the user to construct and save search queries
 that we execute on the database. My first thought was to construct
 something like:

 predicate = and_(or_(item.c.id > 5, item.c.id < 3),
 item.c.name.like('aap'))

 And save this to a pickle column in the database. But it gives:


ugh !  there is a specific feature, added in 0.4, that would prevent  
this from being convenient if you're using mappers, since we thought,  
"why would anyone want to save a SQL construct?" :)  it basically will  
try to execute the expression in the next INSERT or UPDATE clause.   
you'd have to pre-pickle before attaching.

i'm not sure how you're getting a module error in 0.3; pickling of  
metadata/table objects should work as of the most recent 0.3 release  
(but not earlier 0.3 releases).

but in 0.4 the expression currently is housing operators as functions  
attached to a class, so i've just added ticket #735 to move those  
functions to the module level.






[sqlalchemy] Re: SQLAlchemy 0.4 beta2 released

2007-08-14 Thread Michael Bayer


On Aug 14, 2007, at 3:30 PM, [EMAIL PROTECTED] wrote:


 databases/sqlite: (reflecttable)
   pragma_names is missing the BOOLEAN word/type, so it falls back to
 nulltype

 btw why isn't each dialect typeclass adding its own entry to that
 pragma_names, respectively to the colspecs?
 Or, each class could carry its pragma word and base type, and the dicts
 could be built by walking locals() with issubclass(..) ?

 Anyway, these dicts (the grammar) should be automatically built from
 the available typeclasses...

patches welcome




[sqlalchemy] Re: Oracle sequences issue

2007-08-14 Thread Michael Bayer

we don't have a "schema" attribute on Sequence right now.  we have
only limited testing support for issuing CREATE statements in an
alternate schema, and those are all against Postgres (since i only
have oracle XE here...).  adding schema to Sequence is ticket #584
and we also need to do ticket #726 to fully support alternate-schema
sequences.

for now i'd say have your CREATE SEQUENCE as a literal DDL statement
(i.e. use text("CREATE SEQUENCE ..."))


On Aug 14, 2007, at 2:57 PM, Pavel Skvazh wrote:


 I'm trying to make sequences in the Oracle database.
 Here's my model
 def init():
   #conf = paste.deploy.CONFIG
   #uri = conf['sqlalchemy.dburi']
   if 'login' in session:
   uri=1
   else:
   conf = paste.deploy.CONFIG
    uri = conf['sqlalchemy.dburi']  # Admin rights here
   engine = create_engine(uri, convert_unicode=True)
 # meta.bind = engine
   engine.echo = True
   meta.connect(engine)

 application_sequence = Sequence('Application_id_seq', optional=True)
 application_table = Table ( 'Application', meta,
   Column('ID_Application',  Integer, application_sequence,
 primary_key=True),
   Column('ID_Seller',   Integer, nullable=False),
   Column('ID_Lot',  Integer, nullable=False),
   Column('ID_Sell', Integer, nullable=False),
   Column('Text_Application',String(2048)),
   Column('Date_Start_Application',  DateTime,
 default=func.current_timestamp()),
   Column('ID_Status',   Integer),
   Column('Date_Change_Application', DateTime),
   Column('Date_Finish_Application', DateTime),
   ForeignKeyConstraint(['ID_Sell', 'ID_Lot'], ['data.Lot.ID_Sell',
 'data.Lot.ID_Lot'], ondelete='CASCADE'),
   schema='data'
 )

 Websetup is done like this

 def setup_config(command, filename, section, vars):
   conf = paste.deploy.appconfig('config:' + filename)
   conf.update(dict(app_conf=conf.local_conf,
 global_conf=conf.global_conf))
   paste.deploy.CONFIG.push_process_config(conf)

   uri = conf['sqlalchemy.dburi_data']  # Here i log in as the Data schema
   engine = create_engine(uri)
   print "Connecting to database %s ..." % uri
   model.meta.connect(engine)

 #   Add some basic values into the table.

 Everything works fine during websetup. It adds the values to the
 table, so i assume it can see the Sequence.

 But during work with the model, since it logs in as a different
 schema, it can't see the Sequence.

 After that i changed it to
 application_doc_sequence = Sequence('data.APPLICATION_DOC_ID_SEQ',
 optional=True)
 the model can see it just fine and works great. But when it comes to
 websetup it fails.

 After clearing the database by hand, i decided to take a look at how
 websetup will handle it from the blank database.

 Now it works, but when it comes to websetup it gives me this:
 "name is already used by an existing object"

 As it turns out it creates a sequence literally named
 'data.APPLICATION_DOC_ID_SEQ', which is not exactly right i guess.
 My assumption would be that the two paths treat this string
 differently: the model can see the schema to look for, but websetup
 takes it literally as a string, and on the other hand refuses to
 delete it during drop_all.

 Probably that's a well-known issue and it's fixed in 0.4, or maybe it's
 me (which is in fact far more likely since i'm new to alchemy).
 Anyway, i'd be delighted if you guys would point it out to me.


 





[sqlalchemy] Re: SQLAlchemy 0.4 beta2 released

2007-08-14 Thread Michael Bayer


On Aug 14, 2007, at 12:38 PM, svilen wrote:



 performance-wise - do you have any test/target for profiling? else i can
 repeat some tests i did back in february (if i remember them..)

look in test/perf for some, just added a mass-insert/mass-select test.

also the current branch i'm doing today will chop out about 60% of
function call overhead from the above-mentioned test.

 ---
 database/*
   get_col_spec(self) etc:
 these string-formats may be better without the artificial dict,
  eg. return self._extend("CHAR(%(length)s)" % {'length': self.length})
  - return self._extend("CHAR(%s)" % self.length)
   or
  - return self._extend("CHAR(%(length)s)" % self.__dict__)
 no idea if get_col_spec() is used that much to have crucial impact on
 speed though, at least it looks simpler.

im fine with that (not crucial tho)
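The three spellings being compared behave identically for a simple length-parameterized type; a minimal stand-in (not the real dialect code) makes the comparison concrete:

```python
class CHAR:
    """Toy stand-in for a dialect type with a get_col_spec()-style method."""
    def __init__(self, length):
        self.length = length

    def spec_with_dict(self):
        # original style: builds a throwaway dict on every call
        return "CHAR(%(length)s)" % {'length': self.length}

    def spec_positional(self):
        # simplest form when only one value is interpolated
        return "CHAR(%s)" % self.length

    def spec_instance_dict(self):
        # reuses the instance __dict__ instead of allocating a new dict
        return "CHAR(%(length)s)" % self.__dict__

c = CHAR(30)
assert c.spec_with_dict() == c.spec_positional() \
       == c.spec_instance_dict() == "CHAR(30)"
```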


 ---
 orm.util.AliasedClauses._create_row_adapter()
 class AliasedRowAdapter( object):
  1. can't this be made as standalone class, returning an instance,
 initialized with the map, which is then __call__()ed ?

is it faster to say self.map or to say map from locals() ?  its  
probably not very crucial either way.

  2. this can be faster if:
 a) has_key = __contains__ #instead of yet another funccall
 b) __getitem__ uses try except instead of double lookup key in map

im not sure try/except is faster here - im pretty sure a missing key  
is likely and exception throws are very expensive.  would be worth a  
try to see if the missing key exception actually occurs here.
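In isolation, the two proposed micro-optimizations look like this (a stripped-down stand-in, not the actual AliasedClauses code):

```python
class RowAdapter(object):
    """Toy row adapter translating aliased column keys to row keys."""
    def __init__(self, row, keymap):
        self.row = row
        self.map = keymap

    def __contains__(self, key):
        return key in self.map or key in self.row

    # (a) alias the method instead of defining a wrapper: saves a call
    has_key = __contains__

    def __getitem__(self, key):
        # (b) single lookup via try/except instead of "if key in map" plus
        # a second lookup; only a win if missing keys are the rare case
        try:
            return self.row[self.map[key]]
        except KeyError:
            return self.row[key]

adapter = RowAdapter({'users_id': 7}, {'id': 'users_id'})
assert adapter.has_key('id') and adapter['id'] == 7
assert adapter['users_id'] == 7
```

Whether (b) actually pays off is exactly the hit-versus-miss question debated in the follow-ups of this thread.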


 ---
 orm.mapper
  Mapper._instance():
 WTF is the **{'key':value, ... } ?
 eg. if extension.populate_instance(self, context, row, instance,
 **{'instancekey':identitykey, 'isnew':isnew}) ...
 same thing is done as separate variable a page later;
 btw there are several of these **{} in the file

i think thats a product of a refactoring where the ** was originally  
not there.
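For reference, the **{...} spelling is exactly equivalent to plain keyword arguments; the dict literal only adds an allocation and reads worse:

```python
def populate_instance(instancekey=None, isnew=False):
    # stand-in for the extension hook being discussed
    return (instancekey, isnew)

# identical calls; the second is the refactoring-leftover style
direct = populate_instance(instancekey='key', isnew=True)
via_dict = populate_instance(**{'instancekey': 'key', 'isnew': True})
assert direct == via_dict == ('key', True)
```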


  also, Mapper._options is redundant (leftover?) neverused

yeah gone along with the **{} in r3303



 ---
 orm.attribute
   AttributeManager.init_attr():
 the saving this one eventualy does is too small, compared to a
 property call of ._state.

i havent benched this in a while but my recollection is that the  
AttributeError raise is *much* slower than pre-calling this method.   
a single function call is always faster than an exception throw.

however, i see that the exception throw is being suppressed also with  
a hasattr() being called every timeim not sure why thats that way  
now so i might change it back to throwing AttributeError.



   AttributeManager.register_attribute():
 the  def _get_state(self) that is made into as property _state can
 be made eventualy faster with try-except instead of 'if'.

exception throws are slower.  but like above says, the throw here can  
be prevented if init_attr() is called.


  btw: can't that ._state property be removed altogether (i.e. made a
 plain attribute)? then init_attr() MUST be there setting it up as a plain
 dict.

it should be named something non-collisionworthy such as the current  
_sa_attr_state.


 ---
 orm/unitofwork
 UOWTask._sort_circular_dependencies():
 def get_dependency_task(obj, depprocessor):
 try:
 dp = dependencies[obj]
 except KeyError:
 dp = dependencies.setdefault(obj, {})
   isnt just the setdefault() enough?

this should just say dependencies[obj] = dp = {}
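The redundancy is easy to see side by side; dict.setdefault() already covers both the hit and the miss, so the surrounding try/except adds nothing:

```python
dependencies = {}

# original shape: the except branch repeats what setdefault does anyway
try:
    dp = dependencies['task']
except KeyError:
    dp = dependencies.setdefault('task', {})

# the single-call equivalent
dp2 = dependencies.setdefault('task', {})
assert dp is dp2

# the suggested spelling for the known-missing case (note: it
# overwrites an existing entry, so it is only right when the key
# is guaranteed absent)
dependencies['other'] = dp3 = {}
assert dependencies['other'] is dp3
```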


 ---
 engine.url.
   def translate_connect_args(self, names):
 this assumes the order of passed names matches the order of
 attribute_names inside... very fragile. Why not use set of kwargs
 like (attr_name=replacement_name defaulting to None), then just use
 the non empty ones?

patches welcome

   def _parse_rfc1738_args():
 the 'opts' dict is redundant, would be cleaner if args are just
 passed to URL( name=value)

this is an old patch we got from someone, improvement patches welcome


 ---
 topological.py
QueueDependencySorter.sort():
   'cycles' is redundant neverused variable

would look good in the patch too


 ---
 util:
   ThreadLocal:
   - wouldn't it be faster if the key in the _tdict were the tuple (id, key)
 and not some formatted string off these? or is the key nonhashable?

good catch, should be patched
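The suggestion in sketch form: tuples of hashables are perfectly good dict keys and skip a string-formatting step on every access (the ThreadLocal internals here are invented for illustration, not the util.py code):

```python
import threading

class ToyThreadLocal(object):
    """Per-thread storage keyed on (thread id, name) tuples."""
    def __init__(self):
        self._tdict = {}

    def set(self, name, value):
        # tuple key instead of something like "%d_%s" % (ident, name)
        self._tdict[(threading.get_ident(), name)] = value

    def get(self, name):
        return self._tdict[(threading.get_ident(), name)]

tl = ToyThreadLocal()
tl.set('session', 'S1')
assert tl.get('session') == 'S1'
```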

   - the engine/threadlocal.TLEngine._session() issues a hasattr() on
 such an object. how does it actually work? IMO it always fails

patch which includes a test case to add into test/engine/ 
transaction.py would be welcome







[sqlalchemy] Re: SQLAlchemy 0.4 beta2 released

2007-08-14 Thread Michael Bayer


On Aug 14, 2007, at 2:12 PM, Brian Beck wrote:


 0.4 looks awesome. Some names jump out at me though:
 * sessionmaker - Strange that it's a noun, and not a verb (like
 make_session)

the verb would be "make_session_maker", or "make_session_class",
except in some cases it might not be an actual class...

 * bind/binds arguments - Strange that it's a verb, and not a noun (why
 not engine/engines?)

"bind" is used as a noun here.  it is slightly weird and there was
much discussion about this so you're a little late for that train :).
the reason it's not "engine" anymore is because it can be a Connection
also.  engines and connections are collectively called
"connectables", but nobody understood that term... so it's "bind".






[sqlalchemy] Re: SQLAlchemy 0.4 beta2 released

2007-08-14 Thread sdobrev

  orm.util.AliasedClauses._create_row_adapter()
  class AliasedRowAdapter( object):
   1. can't this be made as standalone class, returning an
  instance, initialized with the map, which is then __call__()ed ?

 is it faster to say self.map or to say map from locals() ?  its
 probably not very crucial either way.
well.. you save on the class' creation :-), get a class that could be used
elsewhere, and eventually more obvious code - runtime outer-namespace
bindings in python are veeery strange beasts sometimes, with funny
side effects.

   2. this can be faster if:
  a) has_key = __contains__ #instead of yet another funccall
  b) __getitem__ uses try except instead of double lookup key
  in map

 im not sure try/except is faster here - im pretty sure a missing
 key is likely and exception throws are very expensive.  would be
 worth a try to see if the missing key exception actually occurs
 here.
i'll have to check; for the last 3 years i've been doing everything assuming
that a (lookup:exception) is faster than
(lookup-if-lookup:lookup-else) in both attribute and dict lookups.
Plus that a function call is the most expensive python thing ever...
apart from repeated a.x a.x a.x attribute access. hmm, time to measure
the myths.

  ---
  orm.attribute
AttributeManager.init_attr():
  the saving this one eventualy does is too small, compared to
  a property call of ._state.

 i havent benched this in a while but my recollection is that the
 AttributeError raise is *much* slower than pre-calling this method.
 a single function call is always faster than an exception throw.

 however, i see that the exception throw is being suppressed also
 with a hasattr() being called every timeim not sure why thats
 that way now so i might change it back to throwing AttributeError.

   btw: cant that ._state property be removed alltogether (i.e.
  made a plain attribute? then init_attr() MUST be there seting it
  up as plain dict.

 it should be named something non-collisionworthy such as the
 current _sa_attr_state.
the only gain from the property is that it is readonly, i.e.
obj._state = None is not allowed (as well as delete). But i can do
obj._sa_attr_state = None (and all goes to hell). If that property
disappears altogether, we are left with a plain _sa_attr_state
attribute, which can now be set to None and deleted... which is more
or less the same as before, minus the complications of the property.
Fair trade, no?

  ---
  util:
ThreadLocal:
 - wouldnt be faster if the key in the _tdict is tuple(id,key)
  and not some formatted string off these? or the key is
  nonhashable?

 good catch, should be patched

 - the engine/threadlocal.TLEngine._session() issues a
  hasattr() on such object. how does it actualy work? IMO it always
  fails

 patch which includes a test case to add into test/engine/
 transaction.py would be welcome

i've never used these so not much idea what to do here - i was just
passing the code along my eyeline.

will prepare some patches for the other things.

ciao
svilen




[sqlalchemy] Re: SQLAlchemy 0.4 beta2 released

2007-08-14 Thread Jonathan Ellis

On 8/14/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
  im not sure try/except is faster here - im pretty sure a missing
  key is likely and exception throws are very expensive.  would be
  worth a try to see if the missing key exception actually occurs
  here.
 i'll have to check, for the last 3 years i've been doing everything assuming
 that a (lookup:exception) is faster than
 (lookup-if-lookup:lookup-else) in both attribute and dict lookups.

Only if, like Mike said, the common case is for the key to be present.
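A quick timeit sketch of that trade-off; absolute numbers vary by machine, but the ordering on misses is robust:

```python
import timeit

N = 100_000
setup = "d = {'k': 1}"

# key present: try/except pays no exception cost
hit_try = timeit.timeit(
    "try:\n v = d['k']\nexcept KeyError:\n v = None",
    setup=setup, number=N)
hit_if = timeit.timeit(
    "v = d['k'] if 'k' in d else None", setup=setup, number=N)

# key missing: every iteration raises and catches KeyError
miss_try = timeit.timeit(
    "try:\n v = d['x']\nexcept KeyError:\n v = None",
    setup=setup, number=N)
miss_if = timeit.timeit(
    "v = d['x'] if 'x' in d else None", setup=setup, number=N)

# raising KeyError on every miss dwarfs the cost of the membership test
assert miss_try > miss_if
```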




[sqlalchemy] Re: SQLAlchemy 0.4 beta2 released

2007-08-14 Thread Michael Bayer


On Aug 14, 2007, at 4:35 PM, Michael Bayer wrote:

 On Aug 14, 2007, at 12:38 PM, svilen wrote:


 ---
 orm.attribute
   AttributeManager.init_attr():
 the saving this one eventualy does is too small, compared to a
 property call of ._state.

 i havent benched this in a while but my recollection is that the
 AttributeError raise is *much* slower than pre-calling this method.
 a single function call is always faster than an exception throw.

 however, i see that the exception throw is being suppressed also with
 a hasattr() being called every timeim not sure why thats that way
 now so i might change it back to throwing AttributeError.

the results are in, running test/prof/masseagerload.py.  this test is  
very heavy on creating new instances from mapper rows, which is where  
the initialization of _state comes in.

no init_attr(), detect missing with AttributeError
Profiled target 'masseagerload', wall time: 0.59 seconds
Profile report for target 'masseagerload' (masseagerload.prof)
57039 function calls (55962 primitive calls) in 0.489 CPU seconds

init_attr(), detect missing with AttributeError
Profiled target 'masseagerload', wall time: 0.53 seconds
57549 function calls (56472 primitive calls) in 0.426 CPU seconds

init_attr(), detect missing with hasattr
Profiled target 'masseagerload', wall time: 0.56 seconds
57549 function calls (56472 primitive calls) in 0.431 CPU seconds

no init_attr(), detect missing with hasattr
Profiled target 'masseagerload', wall time: 0.49 seconds
57039 function calls (55962 primitive calls) in 0.390 CPU seconds

i'm not exactly sure why the hasattr() call, being present and then
removed, doesn't change the number of function calls.  anyway, the
times vary a little bit, but the hasattr() call, even though it's
called many more times than the AttributeError gets raised, is
slightly faster than raising AttributeError.  so no AttributeError,
and I like getting rid of init_attr() very much, so it's out in r3313.



