[sqlalchemy] Outerjoin with a subset of columns

2007-08-15 Thread mc

Hi,

Note the following code:

oj=outerjoin(s,f)
r1=select([oj]).execute().fetchall()
r2=select([oj.left.c.id, oj.right.c.status]).execute().fetchall()

The first select executes a LEFT OUTER JOIN and returns all columns.
I wanted only 2 specific columns, so I tried the 2nd select.
That does not execute a LEFT OUTER JOIN, though.

How do I achieve my goal?

TIA


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Outerjoin with a subset of columns

2007-08-15 Thread King Simon-NFHD78

 -Original Message-
 From: sqlalchemy@googlegroups.com 
 [mailto:[EMAIL PROTECTED] On Behalf Of mc
 Sent: 15 August 2007 14:23
 To: sqlalchemy
 Subject: [sqlalchemy] Outerjoin with a subset of columns
 
 
 Hi,
 
 Note the following code:
 
 oj=outerjoin(s,f)
 r1=select([oj]).execute().fetchall()
 r2=select([oj.left.c.id, oj.right.c.status]).execute().fetchall()
 
 The first select executes a LEFT OUTER JOIN and returns all columns.
 I wanted only 2 specific columns, so I tried the 2nd select.
 That does not execute a LEFT OUTER JOIN, though.
 
 How do I achieve my goal?
 

You want to use the 'from_obj' parameter for select. Something like:

select([s.c.id, f.c.status], from_obj=[oj])

Hope that helps,

Simon
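For readers outside the thread: the effect of `from_obj` can be sketched with plain SQL via the stdlib sqlite3 module. The schema below (tables `s` and `f`, join column `s_id`) is a hypothetical stand-in, since the original tables weren't posted; the point is that `select([s.c.id, f.c.status], from_obj=[oj])` keeps the join itself in the FROM clause even though only two columns are selected:

```python
import sqlite3

# Hypothetical stand-ins for the poster's tables 's' and 'f';
# some 's' rows have no matching 'f' row, which is what the
# LEFT OUTER JOIN preserves.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE s (id INTEGER PRIMARY KEY);
    CREATE TABLE f (id INTEGER PRIMARY KEY, s_id INTEGER, status TEXT);
    INSERT INTO s (id) VALUES (1), (2);
    INSERT INTO f (id, s_id, status) VALUES (10, 1, 'ok');
""")

# Roughly the SQL that select([...], from_obj=[oj]) renders:
# a column subset, but the FROM clause is still the outer join.
rows = con.execute("""
    SELECT s.id, f.status
    FROM s LEFT OUTER JOIN f ON f.s_id = s.id
    ORDER BY s.id
""").fetchall()
print(rows)  # -> [(1, 'ok'), (2, None)]
```

The unmatched row survives as `(2, None)` only because the join stayed OUTER; selecting the columns without `from_obj` would instead produce an implicit cross join.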




[sqlalchemy] Re: Outerjoin with a subset of columns

2007-08-15 Thread mc

It certainly did help.
Many thanks!


On Aug 15, 4:41 pm, King Simon-NFHD78 [EMAIL PROTECTED]
wrote:
  -Original Message-
  From: sqlalchemy@googlegroups.com
  [mailto:[EMAIL PROTECTED] On Behalf Of mc
  Sent: 15 August 2007 14:23
  To: sqlalchemy
  Subject: [sqlalchemy] Outerjoin with a subset of columns

  Hi,

  Note the following code:

  oj=outerjoin(s,f)
  r1=select([oj]).execute().fetchall()
  r2=select([oj.left.c.id, oj.right.c.status]).execute().fetchall()

  The first select executes a LEFT OUTER JOIN and returns all columns.
  I wanted only 2 specific columns, so I tried the 2nd select.
  That does not execute a LEFT OUTER JOIN, though.

  How do I achieve my goal?

 You want to use the 'from_obj' parameter for select. Something like:

 select([s.c.id, f.c.status], from_obj=[oj])

 Hope that helps,

 Simon





[sqlalchemy] Re: SQLAlchemy 0.4 beta2 released

2007-08-15 Thread sdobrev

On Wednesday 15 August 2007 04:26:31 Michael Bayer wrote:
 On Aug 14, 2007, at 4:35 PM, Michael Bayer wrote:
  On Aug 14, 2007, at 12:38 PM, svilen wrote:
  ---
  orm.attribute
AttributeManager.init_attr():
  the saving this one eventually does is too small, compared to
  a property call of ._state.
 
  i haven't benched this in a while but my recollection is that the
  AttributeError raise is *much* slower than pre-calling this
  method. a single function call is always faster than an exception
  throw.

  however, i see that the exception throw is being suppressed also
  with a hasattr() being called every time... i'm not sure why that's
  that way now, so i might change it back to throwing
  AttributeError.

 the results are in, running test/prof/masseagerload.py.  this test
 is very heavy on creating new instances from mapper rows, which is
 where the initialization of _state comes in.

 no init_attr(), detect missing with AttributeError
 Profiled target 'masseagerload', wall time: 0.59 seconds
 Profile report for target 'masseagerload' (masseagerload.prof)
 57039 function calls (55962 primitive calls) in 0.489 CPU seconds

 init_attr(), detect missing with AttributeError
 Profiled target 'masseagerload', wall time: 0.53 seconds
 57549 function calls (56472 primitive calls) in 0.426 CPU seconds

 init_attr(), detect missing with hasattr
 Profiled target 'masseagerload', wall time: 0.56 seconds
 57549 function calls (56472 primitive calls) in 0.431 CPU seconds

 no init_attr(), detect missing with hasattr
 Profiled target 'masseagerload', wall time: 0.49 seconds
 57039 function calls (55962 primitive calls) in 0.390 CPU seconds

 i'm not exactly sure why the hasattr() call, being present and
 then removed, doesn't change the number of function calls.  anyway, 
 the times vary a little bit, but the hasattr call, even though it's
 called many more times than the AttributeError gets raised, is
 slightly faster than raising AttributeError.  so no AttributeError,
 and I like getting rid of init_attr() very much, so it's out in
 r3313.

heh. 

First, init_attr() as it was has never worked: it sets up a 
private __sa_attr_state, while what is actually used is the plain 
non-private _sa_attr_state attribute (note the leading underscores).

Second, i went to r3312, let init_attr() set up _state as a plain dict, 
and removed the _state property. The difference between plain dict and 
property (in favor of the plain dict) is about 2-3%.

property:
loaded  10  items each with  500 subitems
523034 function calls (512957 primitive calls) in 2.556 CPU s
loaded  30  items each with  500 subitems
1564374 function calls (1534297 primitive calls) in 7.796 CPU s
loaded  16  items each with  1500 subitems
2499436 function calls (2451359 primitive calls) in 12.518 CPU s

plain dict:
loaded  10  items each with  500 subitems
513014 function calls (502937 primitive calls) in 2.525 CPU s
loaded  30  items each with  500 subitems
1534314 function calls (1504237 primitive calls) in 7.623 CPU s
loaded  16  items each with  1500 subitems
2451404 function calls (2403327 primitive calls) in 12.196 CPU s

up to you
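A minimal, self-contained sketch of the two miss-path styles compared in the benchmark above (catching AttributeError vs. a hasattr pre-check). This is illustrative stand-in code, not SQLAlchemy's actual attribute machinery, and absolute timings will vary by machine:

```python
import timeit

class Obj(object):
    pass

def via_exception(o):
    # Miss path pays for raising and catching AttributeError.
    try:
        return o._state
    except AttributeError:
        o._state = {}
        return o._state

def via_hasattr(o):
    # Miss path is a boolean check, no exception machinery.
    if not hasattr(o, "_state"):
        o._state = {}
    return o._state

# Time only the miss path (a fresh object every call), which is the
# case the masseagerload profile exercises when creating instances.
t_exc = timeit.timeit(lambda: via_exception(Obj()), number=20000)
t_has = timeit.timeit(lambda: via_hasattr(Obj()), number=20000)
print(t_exc, t_has)
```

Both functions are behaviourally identical; only the cost of the miss path differs, which is why the function-call counts in the profiles match while the wall times diverge slightly.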




[sqlalchemy] Ascii codec instead of unicode ?

2007-08-15 Thread Arun Kumar PG
Hi All,

Recently I upgraded to version 0.3.9 of SA. Since then, whenever I try
to save characters in a different language to the table, I get the
exception below:

 File /src/sqlalchemy/engine/base.py,
line 601, in _execute
   raise exceptions.SQLError(context.statement, context.parameters, e)
SQLError: (UnicodeDecodeError) 'ascii' codec can't decode byte 0xc3 in
position 1: ordinal not in range(128)

I am wondering why it is using ascii codec instead of unicode ?

FYI: I am using MySQL 4.1 and the charset of table is utf-8.

-- 
Cheers,

- A
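In modern Python 3 terms, the error above is reproducible without any database at all: 0xc3 is the lead byte of a two-byte UTF-8 sequence, and the ascii codec rejects any byte above 0x7f. A small sketch, using 'é' as an arbitrary example character:

```python
# 'é' encodes to the two UTF-8 bytes 0xc3 0xa9 - the same 0xc3
# that appears in the traceback.
data = "é".encode("utf-8")
print(data)  # -> b'\xc3\xa9'

# The ascii codec only accepts ordinals 0-127, so it fails exactly
# as reported: "ordinal not in range(128)".
try:
    data.decode("ascii")
except UnicodeDecodeError as exc:
    print(exc.reason)  # -> 'ordinal not in range(128)'

# Decoding with the charset the table actually uses succeeds.
assert data.decode("utf-8") == "é"
```

The fix, as the replies below suggest, is making sure every layer (driver, connection, and SA's unicode conversion) agrees on utf-8 rather than falling back to the default codec.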




[sqlalchemy] Re: Ascii codec instead of unicode ?

2007-08-15 Thread jason kirtland

Arun Kumar PG wrote:
 Hi All,
 
 Recently I upgraded to the version 3.9 of SA. Post that whenever I am 
 trying to save characters in different language in the table I am 
 getting the below exception:
 
  File /src/sqlalchemy/engine/base.py,
 line 601, in _execute
raise exceptions.SQLError(context.statement, context.parameters, e)
 SQLError: (UnicodeDecodeError) 'ascii' codec can't decode byte 0xc3 in
 position 1: ordinal not in range(128)
 
 I am wondering why it is using ascii codec instead of unicode ?
 
 FYI: I am using MySQL 4.1 and the charset of table is utf-8.

Odd to see ascii there instead of latin1.  Is your database configured 
for utf-8 client connections?





[sqlalchemy] Re: Ascii codec instead of unicode ?

2007-08-15 Thread Arun Kumar PG
All tables have a charset of utf8. Additionally, I am issuing a SET
NAMES 'utf8' statement as part of connection establishment.

Is anything wrong or missing here?

On 8/15/07, jason kirtland [EMAIL PROTECTED] wrote:


 Arun Kumar PG wrote:
  Hi All,
 
  Recently I upgraded to the version 3.9 of SA. Post that whenever I am
  trying to save characters in different language in the table I am
  getting the below exception:
 
   File /src/sqlalchemy/engine/base.py,
  line 601, in _execute
 raise exceptions.SQLError(context.statement, context.parameters, e)
  SQLError: (UnicodeDecodeError) 'ascii' codec can't decode byte 0xc3 in
  position 1: ordinal not in range(128)
 
  I am wondering why it is using ascii codec instead of unicode ?
 
  FYI: I am using MySQL 4.1 and the charset of table is utf-8.

 Odd to see ascii there instead of latin1.  Is your database configured
 for utf-8 client connections?


 



-- 
Cheers,

- A




[sqlalchemy] Strange behaviour when mapping tables without a real PK

2007-08-15 Thread Alberto Valverde
Hi,

I've encountered a strange behaviour when mapping a table which  
hasn't got a PrimaryKeyConstraint declared but a primary_key is  
declared when building the mapper (in order to be able to map it).

Queries through the mapper work fine until the limit keyword is  
used. Am I doing something wrong?

Attached is a simple script that reproduces the problem on SA 0.3.10

Thanks,

Alberto



The following query fails when using limit and no primary key is declared
at the table level but is declared on the mapper. Exception raised:

exceptions.SQLError(context.statement, context.parameters, e)
sqlalchemy.exceptions.SQLError: (OperationalError) no such column: tbl_row_count.hub_time u'SELECT hub.rel2_id AS hub_rel2_id, hub.rel1_id AS hub_rel1_id, anon_9feb.data AS anon_9feb_data, anon_9feb.id AS anon_9feb_id, anon_d6ec.data AS anon_d6ec_data, anon_d6ec.id AS anon_d6ec_id, hub.time AS hub_time, hub.data AS hub_data \nFROM hub LEFT OUTER JOIN rel2 AS anon_9feb ON anon_9feb.id = hub.rel2_id LEFT OUTER JOIN rel1 AS anon_d6ec ON anon_d6ec.id = hub.rel1_id ORDER BY tbl_row_count.hub_time DESC, anon_9feb.oid, anon_d6ec.oid'

Run script for full traceback.

from sqlalchemy import *

meta = MetaData()
engine = create_engine('sqlite:///:memory:', echo=True)
meta.bind = engine

# Tables
rel1 = Table('rel1', meta,
Column('id', Integer, primary_key=True),
Column('data', Unicode),
)

rel2 = Table('rel2', meta,
Column('id', Integer, primary_key=True),
Column('data', Unicode),
)


hub = Table('hub', meta,
Column('time', DateTime, nullable=False),
Column('rel1_id', Integer, ForeignKey('rel1.id'), nullable=False),
Column('rel2_id', Integer, ForeignKey('rel2.id'), nullable=False),

# Mapped schema doesn't use a primary key since it's a hub table
# in a fact system and time doesn't have enough resolution to guarantee
# uniqueness. 
#XXX Uncommenting the following line fixes it.
##PrimaryKeyConstraint('time', 'rel1_id', 'rel2_id'),

Column('data', Unicode),
)

# Mapped classes
class Hub(object): pass
class Rel1(object): pass
class Rel2(object): pass

# mappers
mapper(Rel1, rel1)
mapper(Rel2, rel2)
mapper(Hub, hub, 
# A PK must be faked in the mapper to be able to map it (some rows are 
# missed due to duped pk when retrieving with the mapper, but it's not much
# of a problem since the schema is mostly queried without the ORM for data
# analysis).
primary_key = [hub.c.time, hub.c.rel1_id, hub.c.rel2_id],
properties = dict(
#XXX: Making the relations lazy fixes it too.
rel1 = relation(Rel1, lazy=False),
rel2 = relation(Rel2, lazy=False),
)
)

def run_test():
meta.create_all()
sess = create_session(bind_to=engine)
# No limit, no problem
sess.query(Hub).select()
# Bang!
sess.query(Hub).select(limit=100)

if __name__ == '__main__':
run_test()


[sqlalchemy] Re: Ascii codec instead of unicode ?

2007-08-15 Thread jason kirtland

Arun Kumar PG wrote:
 All tables are having a charset of utf8. Additionally, I am issuing SET 
 NAMES 'utf8' statement as a part of connection establishment.
 
 Anything that is wrong here or missing ?

Are you updating the connection character set in the driver as well 
after issuing SET NAMES?  It sounds like you might be out of sync and 
the driver is still trying to convert unicode with its default character 
set.

It'd be something like dbapi_con.set_character_set('utf8'), and I'm 
pretty sure that will also issue a basic SET NAMES for you behind the 
scenes.





[sqlalchemy] Re: SQLAlchemy 0.4 beta2 released

2007-08-15 Thread sdobrev

On Tuesday 14 August 2007 23:05:44 Michael Bayer wrote:
 On Aug 14, 2007, at 3:30 PM, [EMAIL PROTECTED] wrote:
  databases/sqlite: (reflecttable)
  pragma_names is missing the BOOLEAN word/type -> nulltype
 
  btw why isn't each dialect-typeclass adding its own entry to
  that pragma_names, respectively to the colspecs ?
  Or, each class to have those pragmaword and basetype, and the
  dicts to be made by walking locals() if issubclass(..) ?
 
  Anyway, these dicts (the grammar) should be automatically built
  from available typeclasses...

 patches welcome

 
here are 2 versions. 
One is simple, walking the module namespace for 
issubclass(TypeEngine), expecting to find .pragma and .colspec in 
each class, and collecting them. The .colspec can probably be figured 
out from __bases__ (as in the other version).

pre
def _issubclass( obj, klas):
    'fail/fool-proof issubclass() - works with ANY argument'
    from types import ClassType
    return isinstance(obj,(type,ClassType)) and issubclass(obj,klas)

def collect_colspecs( namespace):   #this can be moved out of here
    colspecs = {}
    pragma_names = {}
    for kl in namespace.itervalues():
        if _issubclass( kl, sqlalchemy.TypeEngine):
            c = getattr( kl, 'colspec', None) #or 'basetype'
            p = getattr( kl, 'pragma', None)  #or 'sqltype' or rawtype
            if c and p:
                colspecs[c] = kl
                pragma_names[p] = kl
    return colspecs, pragma_names

class SLNumeric(sqltypes.Numeric):
    colspec,pragma = sqltypes.Numeric, 'NUMERIC'
    def get_col_spec(self):
        if self.precision is None:
            return "NUMERIC"
        else:
            return "NUMERIC(%(precision)s, %(length)s)" % self.__dict__

class SLInteger(sqltypes.Integer):
    colspec,pragma = sqltypes.Integer, 'INTEGER'
    def get_col_spec(self): return self.pragma
...
colspecs, pragma_names = collect_colspecs( locals() )

/pre
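The namespace-walking variant can be exercised standalone by substituting stub classes for the real sqlalchemy types (TypeEngine, Numeric and Integer below are stand-ins, and the loop is written in Python 3 style):

```python
class TypeEngine:           # stand-in for sqlalchemy's TypeEngine
    pass

class Numeric(TypeEngine):  # stand-ins for the abstract sqltypes
    pass

class Integer(TypeEngine):
    pass

def collect_colspecs(namespace):
    # Collect every dialect type in the namespace that declares
    # both .colspec and .pragma.
    colspecs = {}
    pragma_names = {}
    for obj in namespace.values():
        if isinstance(obj, type) and issubclass(obj, TypeEngine):
            colspec = getattr(obj, "colspec", None)
            pragma = getattr(obj, "pragma", None)
            if colspec and pragma:
                colspecs[colspec] = obj      # abstract type -> dialect type
                pragma_names[pragma] = obj   # reflected keyword -> dialect type
    return colspecs, pragma_names

class SLNumeric(Numeric):
    colspec, pragma = Numeric, "NUMERIC"

class SLInteger(Integer):
    colspec, pragma = Integer, "INTEGER"

colspecs, pragma_names = collect_colspecs(dict(globals()))
print(sorted(pragma_names))  # -> ['INTEGER', 'NUMERIC']
```

Abstract types without a .colspec/.pragma pair are silently skipped, which is what keeps the base stubs out of the registries.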



the other one uses a metaclass: .pragma is set up explicitly, and the 
colspec's abstract_type is guessed from __bases__.

pre
class MetaDialectType( type):   #this can be moved out of here
    def __new__( metacls, name, bases, dict_):
        #find top-most abstract_type base 
        abstract_type = None
        for b in bases:
            #XXX is walk in depth needed? 
            #e.g. if allowed class SLInt2( SLInteger):...
            if issubclass( b, sqltypes.TypeEngine):
                abstract_type = b
                break
        assert abstract_type, 'class %s: cannot find any abstract \
base type; do inherit from some sqlalchemy type' % name

        try:
            pragma = dict_['pragma']
        except KeyError:
            assert 0, 'class %s: cannot find any pragma' % name

        klas = type.__new__( metacls, name, bases, dict_)
        metacls.colspecs[ abstract_type] = klas
        metacls.pragma_names[ pragma] = klas
        return klas

class SLMetaDialectType( MetaDialectType):
    colspecs = {}
    pragma_names = {}

class SLNumeric( sqltypes.Numeric):
    __metaclass__ = SLMetaDialectType
    pragma = 'NUMERIC'
    def get_col_spec(self):
        r = self.pragma
        if self.precision is not None:
            r += "(%(precision)s, %(length)s)" % self.__dict__
        return r

class SLInteger( sqltypes.Integer):
    __metaclass__ = SLMetaDialectType
    pragma = 'INTEGER'
    def get_col_spec(self): return self.pragma

...
colspecs = SLMetaDialectType.colspecs
pragma_names = SLMetaDialectType.pragma_names 
/pre
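Likewise, a condensed, runnable sketch of the metaclass variant: stub types again replace the real sqltypes, the two metaclass layers are folded into one, and Python 3 metaclass syntax is used:

```python
class TypeEngine:           # stand-ins so the sketch runs without sqlalchemy
    pass

class Numeric(TypeEngine):
    pass

class Integer(TypeEngine):
    pass

class SLMetaDialectType(type):
    colspecs = {}       # abstract type -> dialect type
    pragma_names = {}   # reflected keyword -> dialect type

    def __new__(metacls, name, bases, dict_):
        # Register each new dialect type under its abstract base
        # (guessed from __bases__) and its declared pragma.
        abstract_type = next(b for b in bases if issubclass(b, TypeEngine))
        pragma = dict_["pragma"]
        klas = type.__new__(metacls, name, bases, dict_)
        metacls.colspecs[abstract_type] = klas
        metacls.pragma_names[pragma] = klas
        return klas

class SLNumeric(Numeric, metaclass=SLMetaDialectType):
    pragma = "NUMERIC"

class SLInteger(Integer, metaclass=SLMetaDialectType):
    pragma = "INTEGER"

print(sorted(SLMetaDialectType.pragma_names))  # -> ['INTEGER', 'NUMERIC']
```

The registries fill themselves as a side effect of class creation, so each type is declared exactly once - the consistency gain argued for below.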

==
There are 2 choices to make: 
 - walk locals() vs using metaclass
 - whether to touch get_col_spec()s 

i wanted to have everything specified only once.
Therefore the get_col_spec() redefinition. 
It can be:
 1 left as is, just adding a separate .pragma (no gain, 
consistency-wise, e.g. VARCHR in one place and VARCHAR in another)
 2 remade to use the self.pragma where equivalent (like 80% of 
places) - a lot of same code repeated
 3 put a default one in some base class for all dialect-types, e.g. 
DialectBaseType, which can then be used for filtering locals() or to 
bring in the metaclass
 4 created in the metaclass unless explicitly specified - this is most 
obscure.

btw i suggest some name changes: colspec -> abstract_type and 
pragma_name -> rawdb_type; or something alike.

ciao
svil




[sqlalchemy] Re: Strange behaviour when mapping tables without a real PK

2007-08-15 Thread Michael Bayer
that's definitely not going to work right now.  I added feature  
enhancement #740; currently it's milestone 5 unless we start getting  
more time for some of these things.  put primary_key on your Table  
for now.

On Aug 15, 2007, at 12:04 PM, Alberto Valverde wrote:

 from sqlalchemy import *

 meta = MetaData()
 engine = create_engine('sqlite:///:memory:', echo=True)
 meta.bind = engine

 # Tables
 rel1 = Table('rel1', meta,
 Column('id', Integer, primary_key=True),
 Column('data', Unicode),
 )

 rel2 = Table('rel2', meta,
 Column('id', Integer, primary_key=True),
 Column('data', Unicode),
 )


 hub = Table('hub', meta,
 Column('time', DateTime, nullable=False),
 Column('rel1_id', Integer, ForeignKey('rel1.id'), nullable=False),
 Column('rel2_id', Integer, ForeignKey('rel2.id'), nullable=False),

 # Mapped schema doesn't use a primary key since it's a hub table
 # in a fact system and time doesn't have enough resolution to
 # guarantee uniqueness.
 #XXX Uncommenting the following line fixes it.
 ##PrimaryKeyConstraint('time', 'rel1_id', 'rel2_id'),

 Column('data', Unicode),
 )

 # Mapped classes
 class Hub(object): pass
 class Rel1(object): pass
 class Rel2(object): pass

 # mappers
 mapper(Rel1, rel1)
 mapper(Rel2, rel2)
 mapper(Hub, hub,
 # A PK must be faked in the mapper to be able to map it (some rows are
 # missed due to duped pk when retrieving with the mapper, but it's not
 # much of a problem since the schema is mostly queried without the ORM
 # for data analysis).
 primary_key = [hub.c.time, hub.c.rel1_id, hub.c.rel2_id],
 properties = dict(
 #XXX: Making the relations lazy fixes it too.
 rel1 = relation(Rel1, lazy=False),
 rel2 = relation(Rel2, lazy=False),
 )
 )

 def run_test():
 meta.create_all()
 sess = create_session(bind_to=engine)
 # No limit, no problem
 sess.query(Hub).select()
 # Bang!
 sess.query(Hub).select(limit=100)

 if __name__ == '__main__':
 run_test()





[sqlalchemy] Re: Ascii codec instead of unicode ?

2007-08-15 Thread Michael Bayer
are you using convert_unicode=True and/or the Unicode type ?

On Aug 15, 2007, at 12:01 PM, Arun Kumar PG wrote:

 All tables are having a charset of utf8. Additionally, I am issuing  
 SET NAMES 'utf8' statement as a part of connection establishment.

 Anything that is wrong here or missing ?

 On 8/15/07, jason kirtland [EMAIL PROTECTED] wrote:

 Arun Kumar PG wrote:
  Hi All,
 
  Recently I upgraded to the version 3.9 of SA. Post that whenever  
 I am
  trying to save characters in different language in the table I am
  getting the below exception:
 
   File /src/sqlalchemy/engine/base.py,
  line 601, in _execute
 raise exceptions.SQLError(context.statement,  
 context.parameters, e)
  SQLError: (UnicodeDecodeError) 'ascii' codec can't decode byte  
 0xc3 in
  position 1: ordinal not in range(128)
 
  I am wondering why it is using ascii codec instead of unicode ?
 
  FYI: I am using MySQL 4.1 and the charset of table is utf-8.

 Odd to see ascii there instead of latin1.  Is your database configured
 for utf-8 client connections?





 -- 
 Cheers,

 - A
 





[sqlalchemy] Re: Strange behaviour when mapping tables without a real PK

2007-08-15 Thread Alberto Valverde


On Aug 15, 2007, at 6:55 PM, Michael Bayer wrote:

 thats definitely not going to work right now. I added feature  
 enhancement # 740, currently its milestone 5 unless we start  
 getting more time for some of these things. put primary_key on your  
 Table for now.

I've disabled the eager loading of related objects (lazy=False) for  
now, which apparently seems to work around the lack of a pk in the  
Table (and, surprisingly, results in faster selects)...

If this doesn't work out well I'll probably cheat SA and declare a  
PK constraint but remove it manually on the real table. Unless I can  
think of something better...

Thanks for looking into it.

Alberto




[sqlalchemy] Re: Strange behaviour when mapping tables without a real PK

2007-08-15 Thread Michael Bayer


On Aug 15, 2007, at 1:35 PM, Alberto Valverde wrote:



 On Aug 15, 2007, at 6:55 PM, Michael Bayer wrote:

 thats definitely not going to work right now. I added feature
 enhancement # 740, currently its milestone 5 unless we start
 getting more time for some of these things. put primary_key on your
 Table for now.

 I've disabled the eager loading of related objects (lazy=False) for
 now which apparently seem to work around the lack of pk in Table
 (and, surprisingly,  results faster selects)...

i can see how a lazy load might work.  but a persist operation, does  
that work too?

(of course i'm just avoiding trying it out right now, but time's a  
little short today)







[sqlalchemy] Re: Ascii codec instead of unicode ?

2007-08-15 Thread Arun Kumar PG
Yes, it's being done. I create the engine and then set convert_unicode =
True.

On 8/15/07, Michael Bayer [EMAIL PROTECTED] wrote:

 are you using convert_unicode=True and/or the Unicode type ?
 On Aug 15, 2007, at 12:01 PM, Arun Kumar PG wrote:

 All tables are having a charset of utf8. Additionally, I am issuing SET
 NAMES 'utf8' statement as a part of connection establishment.

 Anything that is wrong here or missing ?

 On 8/15/07, jason kirtland [EMAIL PROTECTED] wrote:
 
 
  Arun Kumar PG wrote:
   Hi All,
  
   Recently I upgraded to the version 3.9 of SA. Post that whenever I am
   trying to save characters in different language in the table I am
   getting the below exception:
  
   File /src/sqlalchemy/engine/base.py,
   line 601, in _execute
   raise exceptions.SQLError(context.statement, context.parameters, e)
   SQLError: (UnicodeDecodeError) 'ascii' codec can't decode byte 0xc3 in
 
   position 1: ordinal not in range(128)
  
   I am wondering why it is using ascii codec instead of unicode ?
  
   FYI: I am using MySQL 4.1 and the charset of table is utf-8.
 
  Odd to see ascii there instead of latin1. Is your database configured
  for utf-8 client connections?
 
 
 


 --
 Cheers,

 - A




 



-- 
Cheers,

- A




[sqlalchemy] Re: Ascii codec instead of unicode ?

2007-08-15 Thread Arun Kumar PG
Any other clues that may be helpful in troubleshooting the cause?

On 8/15/07, Arun Kumar PG [EMAIL PROTECTED] wrote:

 Yes. it's being done. I create the engine and then set convert unicode =
 True.

 On 8/15/07, Michael Bayer  [EMAIL PROTECTED] wrote:
 
  are you using convert_unicode=True and/or the Unicode type ?
  On Aug 15, 2007, at 12:01 PM, Arun Kumar PG wrote:
 
  All tables are having a charset of utf8. Additionally, I am issuing SET
  NAMES 'utf8' statement as a part of connection establishment.
 
  Anything that is wrong here or missing ?
 
  On 8/15/07, jason kirtland [EMAIL PROTECTED] wrote:
  
  
   Arun Kumar PG wrote:
Hi All,
   
Recently I upgraded to the version 3.9 of SA. Post that whenever I
   am
trying to save characters in different language in the table I am
getting the below exception:
   
File /src/sqlalchemy/engine/base.py,
line 601, in _execute
raise exceptions.SQLError(context.statement, context.parameters, e)
SQLError: (UnicodeDecodeError) 'ascii' codec can't decode byte 0xc3
   in
position 1: ordinal not in range(128)
   
I am wondering why it is using ascii codec instead of unicode ?
   
FYI: I am using MySQL 4.1 and the charset of table is utf-8.
  
   Odd to see ascii there instead of latin1. Is your database configured
   for utf-8 client connections?
  
  
  
 
 
  --
  Cheers,
 
  - A
 
 
 
 
   
 


 --
 Cheers,

 - A




-- 
Cheers,

- A




[sqlalchemy] Re: SQLAlchemy 0.4 beta2 released

2007-08-15 Thread Michael Bayer

I had in mind that the metaclass approach would be used, but not  
necessarily with the walking stuff going on.

if you really want to think about this, the idea for a types overhaul  
is ticket #526.  that breaks up the DDL from the adaptation side  
of things.  a metaclass approach would be at the base of it,  
controlling a registry of information about types.


On Aug 15, 2007, at 12:15 PM, [EMAIL PROTECTED] wrote:


 On Tuesday 14 August 2007 23:05:44 Michael Bayer wrote:
 On Aug 14, 2007, at 3:30 PM, [EMAIL PROTECTED] wrote:
 databases/sqlite: (reflecttable)
   pragma_names is missing the BOOLEAN word/type - nulltype

 btw why isn't each dialect-typeclass adding its own entry to
 that pragma_names, respectively to the colspecs ?
 Or, each class to have those pragmaword and basetype, and the
 dicts to be made by walking locals() if issubclass(..) ?

 Anyway, these dicts (the grammar) should be automatically built
 from available typeclasses...

 patches welcome


 here are 2 versions.
 One is simple, walking the module namespace for
 issubclass(TypeEngine), expecting to find .pragma and .colspec in
 each class, and collecting them. The .colspec can probably be figured
 out from __bases__ (as in the other version).

 pre
 def _issubclass( obj, klas):
     'fail/fool-proof issubclass() - works with ANY argument'
     from types import ClassType
     return isinstance(obj,(type,ClassType)) and issubclass(obj,klas)

 def collect_colspecs( namespace): #this can be moved out of here
     colspecs = {}
     pragma_names = {}
     for kl in namespace.itervalues():
         if _issubclass( kl, sqlalchemy.TypeEngine):
             c = getattr( kl, 'colspec', None) #or 'basetype'
             p = getattr( kl, 'pragma', None)  #or 'sqltype' or rawtype
             if c and p:
                 colspecs[c] = kl
                 pragma_names[p] = kl
     return colspecs, pragma_names

 class SLNumeric(sqltypes.Numeric):
     colspec,pragma = sqltypes.Numeric, 'NUMERIC'
     def get_col_spec(self):
         if self.precision is None:
             return "NUMERIC"
         else:
             return "NUMERIC(%(precision)s, %(length)s)" % self.__dict__

 class SLInteger(sqltypes.Integer):
     colspec,pragma = sqltypes.Integer, 'INTEGER'
     def get_col_spec(self): return self.pragma
 ...
 colspecs, pragma_names = collect_colspecs( locals() )

 /pre

 

 the other one uses metaclass, and .pragma is set up, and guesses
 colspec's abstract_type from __bases.

 pre
 class MetaDialectType( type): #this can be moved out of here
     def __new__( metacls, name, bases, dict_):
         #find top-most abstract_type base
         abstract_type = None
         for b in bases:
             #XXX is walk in depth needed?
             #e.g. if allowed class SLInt2( SLInteger):...
             if issubclass( b, sqltypes.TypeEngine):
                 abstract_type = b
                 break
         assert abstract_type, 'class %s: cannot find any abstract \
 base type; do inherit from some sqlalchemy type' % name

         try:
             pragma = dict_['pragma']
         except KeyError:
             assert 0, 'class %s: cannot find any pragma' % name

         klas = type.__new__( metacls, name, bases, dict_)
         metacls.colspecs[ abstract_type] = klas
         metacls.pragma_names[ pragma] = klas
         return klas

 class SLMetaDialectType( MetaDialectType):
     colspecs = {}
     pragma_names = {}

 class SLNumeric( sqltypes.Numeric):
     __metaclass__ = SLMetaDialectType
     pragma = 'NUMERIC'
     def get_col_spec(self):
         r = self.pragma
         if self.precision is not None:
             r += "(%(precision)s, %(length)s)" % self.__dict__
         return r

 class SLInteger( sqltypes.Integer):
     __metaclass__ = SLMetaDialectType
     pragma = 'INTEGER'
     def get_col_spec(self): return self.pragma

 ...
 colspecs = SLMetaDialectType.colspecs
 pragma_names = SLMetaDialectType.pragma_names
 </pre>
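The metaclass-registration idea above can also be reduced to a runnable plain-Python 3 sketch (`metaclass=` keyword instead of the 0.x-era `__metaclass__`; the stand-in classes are hypothetical, not sqlalchemy's):

```python
class TypeEngine: pass           # hypothetical stand-in for sqlalchemy types
class Numeric(TypeEngine): pass
class Integer(TypeEngine): pass

class MetaDialectType(type):
    """Register each dialect type under its abstract base and its pragma."""
    def __new__(metacls, name, bases, dict_):
        klas = super().__new__(metacls, name, bases, dict_)
        # find the first abstract-type base, as in the email's version
        abstract = next((b for b in bases if issubclass(b, TypeEngine)), None)
        if abstract is not None:
            assert 'pragma' in dict_, 'class %s: cannot find any pragma' % name
            metacls.colspecs[abstract] = klas
            metacls.pragma_names[dict_['pragma']] = klas
        return klas

class SLMetaDialectType(MetaDialectType):
    colspecs = {}
    pragma_names = {}

class SLNumeric(Numeric, metaclass=SLMetaDialectType):
    pragma = 'NUMERIC'

class SLInteger(Integer, metaclass=SLMetaDialectType):
    pragma = 'INTEGER'
```

Defining a new dialect type is then a two-line class statement; the registries fill themselves in as a side effect of class creation.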

 ==
 There are 2 choices to make:
  - walk locals() vs using metaclass
  - whether to touch get_col_spec()s

 i wanted to have everything specified only once.
 Therefore the get_col_spec() redefinition.
 It can be:
  1 left as is, just adding a separate .pragma (no gain,
 consistency-wise, e.g. VARCHR in one place and VARCHAR in another)
  2 remade to use the self.pragma where equivalent (like 80% of
 places) - a lot of same code repeated
  3 put a default one in some base class for all dialect-types, e.g.
 DialectBaseType, which can be then used for filtering locals() or to
 bring metaclass
  4 created in the metaclass unless explicitly specified - this is most
 obscure.

 btw i suggest some namechanges: colspec -> abstract_type and
 pragma_name -> rawdb_type; or something alike.

 ciao
 svil

 


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---

[sqlalchemy] Re: Problem with recursive relationship and lazy=False

2007-08-15 Thread Michael Bayer

hi Jeronimo -

eager loading of self-referential relationships is not supported in  
version 0.3.  this is a new feature as of version 0.4, using the  
join_depth option to indicate how many levels deep you'd like to  
eagerly-join.  in any case, even when eager loading is specified, the  
load should fall back to a lazyload if eager loading could not occur.

However, assigning two different classes to the same table, without  
any inheritance identifiers, is also something that has never been  
done before, so additional testing may be needed to ensure that works.   
But I would ask if it's really necessary to map two different classes  
like that?  how do they differ?

Trying your test with 0.4, and using the join_depth option as  
described at http://www.sqlalchemy.org/docs/04/ 
mappers.html#advdatamapping_relation_selfreferential_eagerloading ,  
is worth a try.  If you can use just a single class, that will  
definitely work.  For a description of how self-referential loading  
normally works with 0.4 and (also pretty much with 0.3 as well), see  
http://www.sqlalchemy.org/docs/04/ 
mappers.html#advdatamapping_relation_selfreferential .

hope this helps,

- mike

On Aug 15, 2007, at 12:54 PM, Jeronimo wrote:


 Greetings !!
 I'm having trouble getting a list from a recursive relationship. The
 relation is between NodeA and NodeB, where NodeA would be the parent
 and NodeB the child.
 If the recursive relationship is lazy, the list is loaded correctly when
 the property is requested, but when child elements need to be loaded
 eagerly the list never loads, even using the eagerload option.

 ---
 Test Case:

 from sqlalchemy import *
 from sqlalchemy.ext.assignmapper import assign_mapper
 from sqlalchemy.ext.sessioncontext import SessionContext
 from sqlalchemy.ext.selectresults import SelectResults

 ctx = SessionContext(create_session)
 session = ctx.current
 metadata = MetaData()

 table = Table("node_table", metadata,
 Column('id', Integer, primary_key=True),
 Column('number', Unicode(10), nullable=False),
 Column('parent_id', Integer, ForeignKey('node_table.id'))
 )

 class NodeA(object): pass
 class NodeB(object): pass

 assign_mapper(ctx, NodeB, table)
 # With lazy=True here the list IS loaded when needed
 assign_mapper(ctx, NodeA, table, properties={'b_childs':
 relation(NodeB, lazy=False)})

 metadata.bind = create_engine('sqlite://', echo=True)
 metadata.create_all()

 table.insert().execute(number=1)
 table.insert().execute(number=2)
 table.insert().execute(number=3, parent_id=1)
 table.insert().execute(number=4, parent_id=1)
 table.insert().execute(number=5, parent_id=2)

 a = SelectResults(session.query(NodeA)).filter(NodeA.c.id==1).list()[0]
 # If recursive relationship is lazy b child nodes are loaded, but if
 relationship is not lazy they are never loaded
 # even when using eagerload option
 print a.b_childs

 Does anyone know what the problem is here ? I'm using sqlalchemy
 version 0.3.9.
 Regards !


 





[sqlalchemy] Re: SQLAlchemy 0.4 beta2 released

2007-08-15 Thread sdobrev

On Wednesday 15 August 2007 19:51:30 Michael Bayer wrote:
 On Aug 15, 2007, at 10:52 AM, [EMAIL PROTECTED] wrote:
  Second, i went to r3312, let init_attr() set a _state as plain
  dict and removed _state as property. The difference
  plain-dict/property (in favor of plain dict) is like 2-3%.
 
  property:
  loaded  10  items each with  500 subitems
  523034 function calls (512957 primitive calls) in 2.556 CPU s
  loaded  30  items each with  500 subitems
  1564374 function calls (1534297 primitive calls) in 7.796 CPU s
  loaded  16  items each with  1500 subitems
  2499436 function calls (2451359 primitive calls) in 12.518 CPU s
 
  plain dict:
  loaded  10  items each with  500 subitems
  513014 function calls (502937 primitive calls) in 2.525 CPU s
  loaded  30  items each with  500 subitems
  1534314 function calls (1504237 primitive calls) in 7.623 CPU s
  loaded  16  items each with  1500 subitems
  2451404 function calls (2403327 primitive calls) in 12.196 CPU s
 
  up to you

 great, post a patch for that on trac.
#741, using _sa_attr_state, set-up in mapper._create_instance()
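The property-vs-plain-dict overhead being measured can be reproduced in miniature with stdlib timeit (class names here are hypothetical illustrations, not SA's actual instrumentation):

```python
import timeit

class WithProperty:
    def __init__(self):
        self._d = {}
    @property
    def _state(self):      # every access goes through the property descriptor
        return self._d

class WithPlainDict:
    def __init__(self):
        self._state = {}   # plain instance attribute, no descriptor call

p, d = WithProperty(), WithPlainDict()
t_prop = timeit.timeit(lambda: p._state, number=100_000)
t_plain = timeit.timeit(lambda: d._state, number=100_000)
# the plain attribute is usually marginally faster, in line with the
# 2-3% difference reported in the profile numbers above
print(t_prop, t_plain)
```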

btw (if i haven't asked yet) - do you have a place describing all things 
that happen to some user's object/class once given into SA's arms? 
e.g. all the attributes that grow on the instances (and have to be 
avoided by the user), replacement of __init__, all the descriptors for 
attributes, etc?
i can prepare something like half a page on this theme, but you'll have 
to fill it up, as i surely don't have much idea of which is what/why.
Title like impact on user's object and class




[sqlalchemy] Re: Ascii codec instead of unicode ?

2007-08-15 Thread Andrew Stromnov

Hi,

I have two questions:

  1) what exact versions of MySQL (i.e. 4.1.22) and py-mysqldb?

  2) how many databases and/or tables (and encodings) are used in your
application?

On Aug 15, 9:57 pm, Arun Kumar PG [EMAIL PROTECTED] wrote:
 Any other clue that may be helpful in troubleshooting the cause ?

 On 8/15/07, Arun Kumar PG [EMAIL PROTECTED] wrote:





  Yes. it's being done. I create the engine and then set convert unicode =
  True.

  On 8/15/07, Michael Bayer  [EMAIL PROTECTED] wrote:

   are you using convert_unicode=True and/or the Unicode type ?
   On Aug 15, 2007, at 12:01 PM, Arun Kumar PG wrote:

   All tables are having a charset of utf8. Additionally, I am issuing SET
   NAMES 'utf8' statement as a part of connection establishment.

   Anything that is wrong here or missing ?

   On 8/15/07, jason kirtland [EMAIL PROTECTED] wrote:

Arun Kumar PG wrote:
 Hi All,

 Recently I upgraded to the version 3.9 of SA. Post that whenever I
am
 trying to save characters in different language in the table I am
 getting the below exception:

 File /src/sqlalchemy/engine/base.py,
 line 601, in _execute
 raise exceptions.SQLError(context.statement, context.parameters, e)
 SQLError: (UnicodeDecodeError) 'ascii' codec can't decode byte 0xc3
in
 position 1: ordinal not in range(128)

 I am wondering why it is using ascii codec instead of unicode ?

 FYI: I am using MySQL 4.1 and the charset of table is utf-8.

Odd to see ascii there instead of latin1. Is your database configured
for utf-8 client connections?

   --
   Cheers,

   - A

  --
  Cheers,

  - A

 --
 Cheers,

 - A





[sqlalchemy] Re: Ascii codec instead of unicode ?

2007-08-15 Thread jason kirtland

So to recap, you are setting the character set on the dbapi 
connection via the MySQLdb method in addition to issuing a manual 
SET NAMES query?


Arun wrote:
 Any other clue that may be helpful in troubleshooting the cause ?


 On 8/15/07, Arun Kumar PG [EMAIL PROTECTED]  wrote:

 Yes. it's being done. I create the engine and then set convert
 unicode = True.



 On 8/15/07, Michael Bayer  [EMAIL PROTECTED] wrote:


 are you using convert_unicode=True and/or the Unicode type ?



 On Aug 15, 2007, at 12:01 PM, Arun Kumar PG wrote:

 All tables are having a charset of utf8. Additionally, I am
 issuing SET NAMES 'utf8' statement as a part of connection
 establishment.

 Anything that is wrong here or missing ?


 On 8/15/07, jason kirtland [EMAIL PROTECTED] wrote:


 Arun Kumar PG wrote:
 Hi All,

 Recently I upgraded to the version 3.9 of SA. Post that whenever
 I am trying to save characters in different language in the
 table I am getting the below exception:

 File /src/sqlalchemy/engine/base.py,
 line 601, in _execute
 raise exceptions.SQLError(context.statement, context.parameters,
 e) SQLError: (UnicodeDecodeError) 'ascii' codec can't decode
 byte 0xc3 in  position 1: ordinal not in range(128)

 I am wondering why it is using ascii codec instead of unicode ?

 FYI: I am using MySQL 4.1 and the charset of table is utf-8.

 Odd to see ascii there instead of latin1. Is your database
 configured
 for utf-8 client connections?






 --
 Cheers,

 - A










 --
 Cheers,

 - A






[sqlalchemy] Re: MSSQL default_schema

2007-08-15 Thread Paul Johnston

Hi,

 The more I think about this, the more I'm becoming convinced that 
 specifying an implicit default schema in all generated SQL is a pretty 
 bad idea. The reason is that it would break a feature in a few 
 database engines that might be called schema cascade, or Postgres 
 explicitly calls the schema path.

I agree, I think "if you don't specify a schema, you don't get one" is a 
sensible approach. I think this already holds through most of SA, though; 
there's just a few odd places, and I wonder if drop table is one of 
them, resulting in the original cause of this thread.

As Mike pointed out, there are some places, e.g. table reflection, where 
you have to specify a schema. Well, we can't do anything about that, 
default_schema seems sensible there.

I think MSSQL's set_default_schema is a bad idea from the start. People 
expect it to work as a shortcut for specifying a schema on every table, 
and it doesn't work like that. I think we should remove it for now. If 
people do really need such a feature, it should be a cross-db feature, 
supported by unit tests.

Paul




[sqlalchemy] Re: Problem with recursive relationship and lazy=False

2007-08-15 Thread Jeronimo

Right now i'm considering changing to 0.4 .

Assigning 2 different classes to the same table was the first thing that
i thought of, because A and B are
exactly the same; the only difference is that only one kind of A
( the B class :) ) can have a parent A but no children.
The A class can have many B as children.

Thank you very much Michael !!

Best Regards,
Jeronimo



On Aug 15, 5:47 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 hi Jeronimo -

 eager loading of self-referential relationships is not supported in
 version 0.3.  this is a new feature as of version 0.4, using the
 join_depth option to indicate how many levels deep youd like to
 eagerly-join.  in any case, even when eager loading is specified, the
 load should fall back to a lazyload if eager loading could not occur.

 However, assigning two different classes to the same table, without
 any inheritance identifiers, is also something that has never been
 done before, so additional testing may be needed ensure that works.
 But I would ask if its really necessary to map two different classes
 like that  ?  how do they differ ?

 Trying your test with 0.4, and using the join_depth option as
 described at http://www.sqlalchemy.org/docs/04/
 mappers.html#advdatamapping_relation_selfreferential_eagerloading ,
 is worth a try.  If you can use just a single class, that will
 definitely work.  For a description of how self-referential loading
 normally works with 0.4 and (also pretty much with 0.3 as well), see  
 http://www.sqlalchemy.org/docs/04/
 mappers.html#advdatamapping_relation_selfreferential .

 hope this helps,

 - mike

 On Aug 15, 2007, at 12:54 PM, Jeronimo wrote:



  Greetings !!
  I'm having trouble getting a list from a recursive relationship. The
  relation is between NodeA and NodeB, where NodeA would be the parent
  and NodeB the child.
  If the recursive relationship is lazy, the list is loaded correctly when
  the property is requested, but when child elements need to be loaded
  eagerly the list never loads, even using the eagerload option.

  ---
  Test Case:

  from sqlalchemy import *
  from sqlalchemy.ext.assignmapper import assign_mapper
  from sqlalchemy.ext.sessioncontext import SessionContext
  from sqlalchemy.ext.selectresults import SelectResults

  ctx = SessionContext(create_session)
  session = ctx.current
  metadata = MetaData()

  table = Table("node_table", metadata,
  Column('id', Integer, primary_key=True),
  Column('number', Unicode(10), nullable=False),
  Column('parent_id', Integer, ForeignKey('node_table.id'))
  )

  class NodeA(object): pass
  class NodeB(object): pass

  assign_mapper(ctx, NodeB, table)
  # With lazy=True here the list IS loaded when needed
  assign_mapper(ctx, NodeA, table, properties={'b_childs':
  relation(NodeB, lazy=False)})

  metadata.bind = create_engine('sqlite://', echo=True)
  metadata.create_all()

  table.insert().execute(number=1)
  table.insert().execute(number=2)
  table.insert().execute(number=3, parent_id=1)
  table.insert().execute(number=4, parent_id=1)
  table.insert().execute(number=5, parent_id=2)

  a = SelectResults(session.query(NodeA)).filter(NodeA.c.id==1).list()[0]
  # If recursive relationship is lazy b child nodes are loaded, but if
  relationship is not lazy they are never loaded
  # even when using eagerload option
  print a.b_childs

  Does anyone know what the problem is here ? I'm using sqlalchemy
  version 0.3.9.
  Regards !





[sqlalchemy] Re: Strange behaviour when mapping tables without a real PK

2007-08-15 Thread Alberto Valverde


On Aug 15, 2007, at 7:48 PM, Michael Bayer wrote:



 On Aug 15, 2007, at 1:35 PM, Alberto Valverde wrote:



 On Aug 15, 2007, at 6:55 PM, Michael Bayer wrote:

 thats definitely not going to work right now. I added feature
 enhancement # 740, currently its milestone 5 unless we start
 getting more time for some of these things. put primary_key on your
 Table for now.

 I've disabled the eager loading of related objects (lazy=False) for
 now which apparently seem to work around the lack of pk in Table
 (and, surprisingly,  results faster selects)...

 i can see how a lazy load might work.  but a persist operation, does
 that work too ?

I populate data into the hub table (through the Hub class),  
normalizing it on the way in (creating new spokes if neccesary) and  
hasn't given any trouble. I had never updated any Hub instance  
though... but I've just tried it and changes are persisted with no  
problem.

In case it sheds a light, I've noticed that the query generated with  
eager loading and a real PK looks something like:

SELECT all_cols_and_related_cols
FROM
 (SELECT hub.pk1, ..., hub.pkn, hub.oid FROM hub LIMIT limit) AS  
tbl_row_count,
 hub LEFT OUTER JOIN rel1, 
WHERE hub.pk1 = tbl_row_count.pk1 AND  AND hub.pkn =  
tbl_row_count.pkn
ORDER BY tbl_row_count.oid

When no real pk is present, query looks like:

SELECT all_cols_and_related_cols
FROM hub LEFT OUTER JOIN rel1 AS anon_1, ...
ORDER BY tb_row_count.oid, anon_1.oid, , anon_n.oid.


Alberto




[sqlalchemy] Re: Ascii codec instead of unicode ?

2007-08-15 Thread Arun Kumar PG
I am using mysqldb-1.2.2. I am passing 'SET NAMES' to connect method as a
value for init_command parameter. All tables have utf8 charset. And I pass
convert_unicode=True to engine.

Let me know if anything else is required.

thanks!

On 8/15/07, jason kirtland [EMAIL PROTECTED] wrote:


 So to recap, you are setting the character set on the dbapi
 connection via the MySQLdb method in addition to issuing a manual
 SET NAMES query?


 Arun wrote:
  Any other clue that may be helpful in troubleshooting the cause ?
 
 
  On 8/15/07, Arun Kumar PG [EMAIL PROTECTED]  wrote:
 
  Yes. it's being done. I create the engine and then set convert
  unicode = True.
 
 
 
  On 8/15/07, Michael Bayer  [EMAIL PROTECTED] wrote:
 
 
  are you using convert_unicode=True and/or the Unicode type ?
 
 
 
  On Aug 15, 2007, at 12:01 PM, Arun Kumar PG wrote:
 
  All tables are having a charset of utf8. Additionally, I am
  issuing SET NAMES 'utf8' statement as a part of connection
  establishment.
 
  Anything that is wrong here or missing ?
 
 
  On 8/15/07, jason kirtland [EMAIL PROTECTED] wrote:
 
 
  Arun Kumar PG wrote:
  Hi All,
 
  Recently I upgraded to the version 3.9 of SA. Post that whenever
  I am trying to save characters in different language in the
  table I am getting the below exception:
 
  File /src/sqlalchemy/engine/base.py,
  line 601, in _execute
  raise exceptions.SQLError(context.statement, context.parameters,
  e) SQLError: (UnicodeDecodeError) 'ascii' codec can't decode
  byte 0xc3 in  position 1: ordinal not in range(128)
 
  I am wondering why it is using ascii codec instead of unicode ?
 
  FYI: I am using MySQL 4.1 and the charset of table is utf-8.
 
  Odd to see ascii there instead of latin1. Is your database
  configured
  for utf-8 client connections?
 
 
 
 
 
 
  --
  Cheers,
 
  - A
 
 
 
 
 
 
 
 
 
 
  --
  Cheers,
 
  - A



 



-- 
Cheers,

- A




[sqlalchemy] Re: Ascii codec instead of unicode ?

2007-08-15 Thread jason kirtland

Arun wrote:
 I am using mysqldb-1.2.2. I am passing 'SET NAMES' to connect
 method as a value for init_command parameter. All tables have
 utf8 charset. And I pass convert_unicode=True to engine.

 Let me know if anything else is required.

Ok, you need to get that charset to the driver.  Try removing SET 
NAMES from your init_command, and instead pass charset=utf8 and 
use_unicode=0 in your database connection URL.
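The failure mode in this thread boils down to the driver decoding UTF-8 bytes with the ascii codec. In (modern Python 3) terms:

```python
# UTF-8 bytes for a non-ASCII character -- note the 0xc3 lead byte,
# the same byte that appears in the traceback above
data = 'caf\u00e9'.encode('utf-8')   # b'caf\xc3\xa9'

try:
    data.decode('ascii')             # what a misconfigured connection does
except UnicodeDecodeError as e:
    print('ascii decode fails:', e)

# Telling the decoder the real charset works:
print(data.decode('utf-8'))
```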







[sqlalchemy] Re: SQLAlchemy 0.4 beta2 released

2007-08-15 Thread sdobrev

On Wednesday 15 August 2007 20:54:27 Michael Bayer wrote:
 I had in mind that the metaclass approach would be used, but not
 necesarily with the walking stuff going on.
the walking is a quick and dirty and very simple way to get away with 
it - for now.

 if you really want to think about this, the idea for a types
 overhaul is ticket #526.  that breaks up the DDL from the
 adaptation side of things.  a metaclass approach would be at the
 base of it controlling a registry of information about types.

types... my static_type lib is just about types, and all the metainfo 
you could hang on them...

lets see. you have multiple types (and conversions) here, for the same item: 
 a) the SA-abstract type in schema - eg. types.String
 b) the specific dialect implementation of a), e.g. sqlite.SLText
 c) the python type that is expected to live between SA dialect and 
dbapi, both ways
 d) the actual SQL server name/representation for the type
 e) the python type that is expected to come out of SA when loading, 
eg. unicode
 f) the python type that is expected to go into SA when assigning, eg. 
str/utf8
 g) validation for e) (to SA)
  - so x.myint = 'qq' fails (or x.mystr=12 succeeds)
 h) validation for f) (from SA)
  - so attempting to load x.myint from a column containing 'qq' fails

my experience says conversion and validation are the same ONE thing, 
implementation-wise. you make one of these notions and use it to 
implement both, e.g. i have static_type.validators which i use for 
conversion and/or validation.
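The "conversion and validation are one mechanism" point can be sketched in plain Python (names here are hypothetical, not from static_type or SA):

```python
def chain(*converters):
    """Compose converters; each step either converts its input or raises."""
    def run(value):
        for conv in converters:
            value = conv(value)
        return value
    return run

# str -> int, validating along the way: int() both converts and rejects
to_int = chain(str.strip, int)

print(to_int('  42 '))    # 42
try:
    to_int('qq')          # int('qq') raises -> validation failure, for free
except ValueError as e:
    print('rejected:', e)
```

The same chain serves as a converter when the input is good and as a validator when it is not, which is the claim above.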

The g,h conversions/validations are outside of sql-related-scope, they 
are only application-side related. i guess they will be mostly 
user-specified, with very few automatical.

For example, in dbcook i set up another layer of type-translation on 
top of SA, exactly to address this issue. So application types stay 
applicational, and match to whatever (independently) at persistency  
level (SA). Thus i can use same model file, with same names, once 
having the (my) types as heavy-checking statictypes, and once as 
empty classes (plainwrap.py) used just to match the SA-type 
underneath.

Each dialect keeps its own registry of types, used for bidirectional 
abstract-dialect matching, plus the actual sql (pragma) stuff like 
typenames/args and retrieving them back for reflection. This 
corresponds to the current colspecs and pragma_names dicts. Reflection 
should be configurable whether to stop at dialect level (SLint) or go 
back to abstract types (types.Int) - see my autoload.py.

are there any cases of duality? e.g. str1 and str2 both being str?

current situation: IMO right now typewise all is ok, but you have one 
conversion only, happening inside the dialect, implicitly, which cannot be 
extended/chained; and the registries are separated, inconsistent and 
hard to find.

you wanted also specific converters per dialect, e.g.
 python.str (-> SA.date -> SA.sqlite.SLdate) -> sqlite.datetime 
might be different from
 python.str (-> SA.date -> SA.postgres.PGdate) -> psycopg.datetime

so, do you want the convertors/validators network to follow the way of 
the Types network? i.e. abstract convertors and (actual) 
dialect-implementations? in another registry?

Are there going to be priorities/ordering in the (auto) matching of 
convertors? e.g. you may have str2int and str2anynumber - which to 
choose...

btw. chaining converters changes/pushes the expected (i/o) pythontype. The 
question what is the expected in/out pythontype? should ask the 
first converter in the chain from the user-side, and the result can be 
singular or multiple, e.g. anything to str, or (bool,int,long,float) 
to float.

am i missing something so far?

  btw why isn't each dialect-typeclass adding it's own entry to
  that pragma_names, respectively to the colspecs ?
  Anyway, these dicts (the grammar) should be automaticaly
  built from available typeclasses...
 
  patches welcome
 
  here 2 versions.
  One is simple, walking the module.namespace for
  issubclass(TypeEngine), expecting to find .pragma and .colspec in
  that class and collects them. The .colspec can probably be
  figured out from __bases__ (as in other version)
 
 
  the other one uses metaclass, and .pragma is set up, and guesses
  colspec's abstract_type from __bases.
 
  ==
  There are 2 choices to make:
   - walk locals() vs using metaclass
   - whether to touch get_col_spec()s
 
  i wanted to have everything specified only once.
  Therefore the get_col_spec() redefinition.
  It can be:
   1 left as is, just adding a separate .pragma (no gain,
  consistency-wise, e.g. VARCHR in one place and VARCHAR in
  another) 2 remade to use the self.pragma where equivalent (like
  80% of places) - a lot of same code repeated
   3 put a default one in some base class for all dialect-types,
  e.g. DialectBaseType, which can be then used for filtering
  locals() or to bring metaclass
   4 created in the metaclass unless explicitly specified - this is
  most obscure.
 
  btw i suggest some namechanges, colspec - abstract_type and
  pragma_name 

[sqlalchemy] sqlite unicode/datetime issue in 0.4

2007-08-15 Thread JP

I'm (finally) working on 0.4 compatibility in a couple of small
applications at work. One issue that is cropping up is that when using
sqlite, columns that are defined like this:

Column('established', TIMESTAMP(timezone=True), nullable=False,
  default=func.current_timestamp(type=TIMESTAMP)),

which work correctly in 0.3.10, in 0.4 cause this error:

  File ./sqlalchemy_trunk/lib/sqlalchemy/engine/default.py, line
245, in pre_execution
self.pre_exec()
  File ./sqlalchemy_trunk/lib/sqlalchemy/engine/default.py, line
258, in pre_exec
self.parameters =
self.__convert_compiled_params(self.compiled_parameters)
  File ./sqlalchemy_trunk/lib/sqlalchemy/engine/default.py, line
228, in __convert_compiled_params
parameters = parameters.get_raw_list(processors)
  File ./sqlalchemy_trunk/lib/sqlalchemy/sql.py, line 867, in
get_raw_list
return [
  File ./sqlalchemy_trunk/lib/sqlalchemy/databases/sqlite.py, line
41, in process
return value.strftime(self.__format__)
AttributeError: 'unicode' object has no attribute 'strftime'

Is there a different way to do this that's compatible with both
versions, or is there a new 0.4 way of doing this, or have I just been
doing something wrong all along and only 0.4 is catching it?

JP
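The traceback reduces to a plain attribute lookup failure: a string simply has no strftime, while a real datetime does (Python 3 reports 'str' where the 0.4-era traceback showed 'unicode'):

```python
from datetime import datetime

fmt = '%Y-%m-%d %H:%M:%S'

# a real datetime object has strftime:
print(datetime(2007, 8, 15, 12, 0, 0).strftime(fmt))  # 2007-08-15 12:00:00

# a plain string does not -- this is the AttributeError in the traceback:
try:
    '2007-08-15 12:00:00'.strftime(fmt)
except AttributeError as e:
    print(e)
```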





[sqlalchemy] Re: SQLAlchemy 0.4 beta2 released

2007-08-15 Thread sdobrev

 Reflection should be configurable whether to stop at dialect 
 level (SLint) or go back to abstract types (types.Int) - see my
 autoload.py.
why would one want to stop the reflection from going back to abstract 
types?
i.e. if the current reflection (dialect-level) is made to autoguess 
the abstract SA type, would this break something?





[sqlalchemy] Re: SQLAlchemy 0.4 beta2 released

2007-08-15 Thread sdobrev

On Thursday 16 August 2007 00:33:57 [EMAIL PROTECTED] wrote:
  Reflection should be configurable whether to stop at dialect
  level (SLint) or go back to abstract types (types.Int) - see my
  autoload.py.

 why would one want to stop the reflection from going back to
 abstract types?
 i.e. if the current reflection (dialevt-level) is made to autoguess
 the abstract SA type, would this break something?

Answering myself: dialects may have richer types than the SA abstract 
ones, in which case the extra ones stay as-is.

Also, it seems there are duplicates, because of which the 
abstract-dialect mapping is not 1:1, e.g. postgres has bigint and int, which 
are both SA.Integer. how about these?




[sqlalchemy] Re: MSSQL default_schema

2007-08-15 Thread Rick Morrison
 there's just a few odd places, and I wonder if drop table is one of
 them, resulting in the original cause of this thread.

I don't think that DROP is a special case. Look upthread. The incorrect DROP
happened in the same wrong schema as the incorrect CREATE. The problem is
that the check-table correctly looked in the *right* schema, and so didn't
find the table. If the default schema hadn't been monkeyed with along the
way, the behavior would be correct.

 I think MSSQL's set_default_schema is a bad idea from the start. People
 expect it to work as a shortcut for specifying a schema on every table,
 and it doesn't work like that. I think we should remove it for now.

I agree, it's been a fiddly pain in the neck since inception. Unless someone
yells pretty soon, consider it gone.

One thing left unresolved here is determining whether pyodbc is the culprit
in adding that spurious schema name. Would someone with a working pyodbc
verify that it is by trying some direct-text SQL against a pyodbc DBAPI
cursor? I would think that

   create table foo(id int)

would do the trick. Watch the SQL that actually gets sent over the wire, and
where the table ends up.




[sqlalchemy] Re: sqlite unicode/datetime issue in 0.4

2007-08-15 Thread Michael Bayer


On Aug 15, 2007, at 6:22 PM, JP wrote:


 Is there a different way to do this that's compatible with both
 versions, or is there a new 0.4 way of doing this, or have I just been
 doing something wrong all along and only 0.4 is catching it?


DateTime objects in sqlite currently expect a datetime.datetime  
object, and not a string.  previous versions accepted both.  a lot of  
people are getting this issue so i think im going to see what  
psycopg2 and mysqldb do, and emulate them.  historically ive been  
against hardcoding a particular string format.
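One workaround consistent with this (a sketch, not SA API; the helper name is made up) is to always hand the sqlite dialect a real datetime, coercing strings yourself before binding:

```python
from datetime import datetime

def as_datetime(value, fmt='%Y-%m-%d %H:%M:%S'):
    """Pass a datetime through unchanged; parse a string with strptime."""
    if isinstance(value, datetime):
        return value
    return datetime.strptime(value, fmt)

print(as_datetime('2007-08-15 18:22:00'))   # parsed from a string
print(as_datetime(datetime(2007, 8, 15)))   # passed through as-is
```

Applied at the point where column defaults or bind values are built, this keeps the same code working against both the 0.3 (string-tolerant) and 0.4 (datetime-only) behavior.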




[sqlalchemy] mapping a joined table and ForeignKey functionality for session.save()

2007-08-15 Thread Boris Dušek

Hi,

I am using sqlalchemy like this:

class Entry: pass

table1 = sqa.Table('table1', meta, autoload=True)  # this table has primary key 'id'
table2 = sqa.Table('table2', meta,
                   sqa.Column('table1_id', sqa.Integer, sqa.ForeignKey('table1.id')),
                   autoload=True)
j = table1.join(table2)  # joins on the foreign key, i.e. table2.c.table1_id == table1.c.id
sqa_orm.mapper(Entry, j)
session = sqa_orm.create_session()
entry = Entry()
entry.a = 1
... # working with entry, setting values for all columns except id and
table1_id
session.save(entry)
session.flush()

Now the row in table2 does get stored, but with a NULL value in the
table1_id column. I would expect sqlalchemy to be as smart here as it
already is with the join, and set table2.table1_id to the
corresponding table1.id, since it knows about the foreign key.

Is there a more direct way to achieve this than using properties and
relations (which would require two classes (Entry1, Entry2) mapped to
each table separately, as opposed to just Entry mapped to a table
join)?

Thanks for any suggestions and cheers,
Boris





[sqlalchemy] ondelete cascade with SA.

2007-08-15 Thread James

I just installed the latest 0.3 version of SA (0.3.10).

How does one tell SqlAlchemy to cascade on delete within the TG layer?

I use MySQL with InnoDB and have ON DELETE CASCADE working on the
tables themselves: if I delete an account directly via SQL, the user
records are correctly deleted.

But with SA, deleting an account does not delete the user records.

I saw in the SA docs a rough description of how to do ON DELETE
CASCADE using the passive_deletes=True arg in a mapper, but I'm not
sure how it applies through the TG layer.

I tried adding ondelete="CASCADE" to the ForeignKey constructor call
for the account column (below), based on the example here:
http://www.sqlalchemy.org/docs/03/adv_datamapping.html

This did not work; the child rows were not deleted.
Any help much appreciated!
James

INFO:
I have accounts with one-to-many users in model.py:

class User(ActiveMapper):
    """
    Reasonably basic User definition. Probably would want additional
    attributes.
    """
    class mapping:
        __table__ = "tg_user"
        user_id       = column(Integer, primary_key=True)
        user_name     = column(Unicode(16), unique=True)
        email_address = column(Unicode(255), unique=True)
        display_name  = column(Unicode(255))
        last_name     = column(Unicode(255))
        first_name    = column(Unicode(255))
        password      = column(Unicode(40))
        created       = column(DateTime, default=datetime.now)
        nonce         = column(Unicode(50))
        account       = column(Integer,
                               foreign_key=ForeignKey("account.id"))
        primary       = column(Integer)

class Account(ActiveMapper):
    class mapping:
        __table__ = "account"
        id               = column(Integer, primary_key=True)
        account_name     = column(Unicode(50), unique=True)
        number           = column(Unicode(50), unique=True)
        gateway_cust_num = column(Unicode(50))
        created          = column(DateTime, default=datetime.now)
        company          = column(Unicode(50))
        address          = column(Unicode(50))
        address2         = column(Unicode(50))
        city             = column(Unicode(50))
        state            = column(Unicode(20))
        zip              = column(Unicode(20))
        phone            = column(Unicode(20))
        cc_last_4        = column(Unicode(10))
        users            = one_to_many('User')
        invoices         = one_to_many('Invoice')
        plan_id          = column(Integer)



