[sqlalchemy] Re: Composite Association Proxies - values of dict are not deleted in DB when keys are deleted

2018-10-28 Thread Sven
Indeed, in my case it's smarter to put the keys and the values in the same
table.

I tried this and it seems to work fine:

from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
from sqlalchemy.orm import relationship, backref, Session
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm.collections import attribute_mapped_collection


engine = create_engine('sqlite:///foo.db')
Base = declarative_base()


class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String(64))

    # the same 'user_keywords'->'keyword' proxy as in
    # the basic dictionary example
    keywords = association_proxy(
        'user_keywords',
        'keyword',
        creator=lambda k, v: UserKeyword(special_key=k, keyword=v)
    )

    def __init__(self, name):
        self.name = name


class UserKeyword(Base):
    __tablename__ = 'user_keyword'

    id = Column(Integer, primary_key=True)
    user_id = Column(ForeignKey('user.id'))
    special_key = Column(String)
    keyword = Column(String)
    user = relationship(User, backref=backref(
        "user_keywords",
        collection_class=attribute_mapped_collection("special_key"),
        cascade="all, delete-orphan"
    ))


Base.metadata.create_all(engine)
session = Session(engine)


john = User("john")
session.add(john)

john.keywords["k1"] = "v1"
john.keywords["k2"] = "v2"

session.commit()

del john.keywords["k1"]
del john.keywords["k2"]

session.commit()
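
For what it's worth, a quick way to check that the delete-orphan cascade really
empties the table (a small addition of mine, not part of the original example):

# after the second commit both UserKeyword rows should be gone: removing a
# key from john.keywords detaches the UserKeyword from its parent User and
# the "delete-orphan" cascade deletes the orphaned row
print(session.query(UserKeyword).count())  # expected: 0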

Thank you!



[sqlalchemy] Composite Association Proxies - values of dict are not deleted in DB when keys are deleted

2018-10-26 Thread Sven
Hello!

I am working with the *Composite Association Proxies* example available 
under the following link:

https://docs.sqlalchemy.org/en/latest/orm/extensions/associationproxy.html#composite-association-proxies

*My problem:* when I delete the keys of the dictionary, SQLAlchemy leaves the
values of the dictionary in the database. The keys are deleted, but not the
values.

I didn't modify the workings of the Composite Association Proxies example. I
just created and stored a "User" object before trying to delete the keys from
the dictionary.

*Here is the code:*
from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
from sqlalchemy.orm import relationship, backref, Session
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm.collections import attribute_mapped_collection


engine = create_engine('sqlite:///foo.db')
Base = declarative_base()


class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String(64))

    # the same 'user_keywords'->'keyword' proxy as in
    # the basic dictionary example
    keywords = association_proxy(
        'user_keywords',
        'keyword',
        creator=lambda k, v: UserKeyword(special_key=k, keyword=v)
    )

    def __init__(self, name):
        self.name = name


class UserKeyword(Base):
    __tablename__ = 'user_keyword'
    user_id = Column(Integer, ForeignKey('user.id'), primary_key=True)
    keyword_id = Column(Integer, ForeignKey('keyword.id'), primary_key=True)
    special_key = Column(String)
    user = relationship(User, backref=backref(
        "user_keywords",
        collection_class=attribute_mapped_collection("special_key"),
        cascade="all, delete-orphan"
    ))

    # the relationship to Keyword is now called 'kw'
    kw = relationship("Keyword")

    # 'keyword' is changed to be a proxy to the
    # 'keyword' attribute of 'Keyword'
    keyword = association_proxy('kw', 'keyword')


class Keyword(Base):
    __tablename__ = 'keyword'
    id = Column(Integer, primary_key=True)
    keyword = Column('keyword', String(64))

    def __init__(self, keyword):
        self.keyword = keyword


Base.metadata.create_all(engine)
session = Session(engine)

john = User("john")
session.add(john)

john.keywords["k1"] = "v1"
john.keywords["k2"] = "v2"

session.commit()

del john.keywords["k2"]
del john.keywords["k1"]

session.commit()

*Here are the values stored in the database after the two commits:*

*Table USER:*

ID - NAME
1  -  "john"

*Table USER_KEYWORD:*

USER_ID  -  KEYWORD_ID  - SPECIAL_KEY
The table is empty.

*Table KEYWORD:*

ID  -  KEYWORD
1   -  "v1" 
2   -  "v2"

Is it possible to configure SQLAlchemy so that the values of the dictionary
are also deleted from the database when the keys are deleted?
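
For instance, would a cascade from UserKeyword to its Keyword be the right
direction? Something like the following (an untested sketch on my side, not
taken from the example; single_parent=True seems to be required for
delete-orphan on a many-to-one, and it only makes sense if a Keyword row is
never shared between UserKeywords):

# hypothetical variant of the 'kw' relationship inside UserKeyword:
# deleting a UserKeyword would then also delete its Keyword row
kw = relationship("Keyword", cascade="all, delete-orphan", single_parent=True)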

I don't know whether it explains anything, but I am using PostgreSQL.

Thank you for your help!

Sven



[sqlalchemy] Re: Generic Associations - table_per_association: parent attribute

2018-07-27 Thread Sven
Hi Mike,

Ok, let's forget everything I said before, it is too confusing.

I propose to start from the *table_per_association* example:

http://docs.sqlalchemy.org/en/latest/_modules/examples/generic_associations/table_per_association.html

Would it be possible to have an attribute *address.parent* linked to 
*Customer*/*Supplier*? Exactly like in the *table_per_related* example?

for customer in session.query(Customer):
    for address in customer.addresses:
        print(address)
        print(address.parent)  # this attribute is what I need

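For instance, would something along these lines, inside the declared_attr of
the HasAddresses mixin in that example, be a reasonable direction? (An
untested sketch on my side; giving each parent class its own backref name is
my idea, not something taken from the example, and the resulting attribute
would be a collection rather than a single parent.)

# hypothetical change where the example builds the association table:
# add a backref whose name is derived from the owning class, e.g.
# address.customer_parents / address.supplier_parents, since a single
# shared "parent" backref on Address would conflict between Customer
# and Supplier.
return relationship(
    Address,
    secondary=address_association,
    backref="%s_parents" % cls.__tablename__,
)
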
Thank you!



[sqlalchemy] Re: Generic Associations - table_per_association: parent attribute

2018-06-24 Thread Sven
Hello,

Any idea regarding my problems ?

Thank you !

Sven



[sqlalchemy] Re: Generic Associations - table_per_association: parent attribute

2018-06-10 Thread Sven
Hello,

does someone have an answer or an idea I could study?

I also tried the table_per_related example, but it will not work, since my
"Object" instances have to be storable both by Hands and by ObjectContainers,
and they must be able to move from Hands to ObjectContainers (and vice versa).
Indeed, a player can take objects from a chest and hold them in Hands. I can't
store a Hands.Object into ObjectContainer.objects, which would only be
supposed to contain ObjectContainer.Object instances...

Thank you very much.

Sven



[sqlalchemy] Generic Associations - table_per_association: parent attribute

2018-05-31 Thread Sven


Hello,


Today I have questions regarding generic associations and more specifically 
the *table_per_association* example:


http://docs.sqlalchemy.org/en/latest/_modules/examples/generic_associations/table_per_association.html


I am trying to adapt it to the following case:


   - A class *Object*, which represents objects in the game (like potions or
   weapons)
   - A class *ObjectContainer*, which represents a container with a list of
   the objects contained in it (a chest, for example)
   - A class *Hands*, which represents the hands of a player. Hands can hold
   objects, but the class Hands must not directly contain the list of the
   objects held by the player; it finds them in another way

Here is my code:


from sqlalchemy.ext.declarative import as_declarative, declared_attr
from sqlalchemy import create_engine, Integer, Column, \
    String, ForeignKey, Table
from sqlalchemy.orm import Session, relationship


@as_declarative()
class Base(object):
    """Base class which provides automated table name
    and surrogate primary key column.
    """

    @declared_attr
    def __tablename__(cls):
        return cls.__name__.lower()

    id = Column(Integer, primary_key=True)


class Object(Base):
    name = Column(String, nullable=False)


class HasObjects(object):

    @declared_attr
    def objects(cls):
        object_association = Table(
            "%s_objects" % cls.__tablename__,
            cls.metadata,
            Column("object_id", ForeignKey("object.id"),
                   primary_key=True),
            Column("%s_id" % cls.__tablename__,
                   ForeignKey("%s.id" % cls.__tablename__),
                   primary_key=True),
        )
        return relationship(Object, secondary=object_association)

        # The following line doesn't work:
        #
        #   return relationship(Object, secondary=object_association,
        #                       backref="parent")
        #
        # Error:
        #
        # sqlalchemy.exc.ArgumentError: Error creating backref 'parent' on
        # relationship 'ObjectContainer.objects': property of that name
        # exists on mapper 'Mapper|Object|object'


class ObjectContainer(HasObjects, Base):
    name = Column(String)


class Hands(HasObjects, Base):
    name = Column(String)


engine = create_engine('sqlite://', echo=True)
Base.metadata.create_all(engine)

session = Session(engine)

session.add_all([
    ObjectContainer(
        name='Chest 1',
        objects=[
            Object(name="potion 1"),
            Object(name="potion 2")
        ]
    ),
    Hands(
        name="Hands player 1",
        objects=[
            Object(name="potion 3"),
            Object(name="potion 4")
        ]
    ),
])

session.commit()



I have two questions:


   1. How could I have a parent attribute in *Object* linked to
   *ObjectContainer* or *Hands*? I tried with backref, but it doesn't seem
   to work (see the comments in the code).
   2. How could I avoid *Hands*, with this *HasObjects* mixin, automatically
   getting a list of objects? I only need the parent attribute of *Object*
   linked to *Hands*; I don't need any list of the objects in *Hands*. Of
   course, I could ignore that and let the list be created, but it's a bit
   dirty.

I assume that the answers are pretty simple, but I think my understanding of
the "secondary" parameter of relationship() is not good enough to find a
solution.


Thank you!


Sven



Re: [sqlalchemy] Re: Relationships - crash when class instantiation

2018-02-26 Thread Sven
Thank you Mike, it works !

On Saturday, February 24, 2018 at 16:35:57 UTC+1, Mike Bayer wrote:
>
> On Sat, Feb 24, 2018 at 5:33 AM, Sven > 
> wrote: 
> > Hello, 
> > 
> > I have now a small example which illustrates the problem. 
> > 
> > Program logic : each NPC inherites from Character and each NPC is based 
> on a 
> > prototype. It means, for example, that you can define the prototype 
> > "red_guard" and then create 100 red guards all based on this prototype. 
> Most 
> > part of the NPC attributes are stored in the class Prototype. For 
> example, I 
> > can define in the prototype that a red guard has 1000 health point and 
> it 
> > will be reflected automatically in every NPC based on the prototype. 
> This is 
> > why, the __getattr_ function of NPC is looking in the prototype 
> attributes. 
>
>
> your NPC class is making it impossible to test if the class has an 
> attribute present or not and additionally `__getattr__()` assumes that 
> the "self.prototype" attribute is present, when it's not, causing an 
> endless loop: 
>
> class Character(object): 
> def __init__(self): 
> if hasattr(self, '_foo'): 
> print("yup") 
>
>
> class Prototype(object): 
> pass 
>
>
> class NPC(Character): 
> def __init__(self): 
> Character.__init__(self) 
> self.prototype = Prototype() 
>
> def __getattr__(self, nom_attr): 
> return getattr(self.prototype, nom_attr) 
>
> NPC() 
>
>
> using pdb  inside of __getattr__ would reveal that the attributes 
> being asked for here start with underscores, so one way to work around 
> this is: 
>
> def __getattr__(self, nom_attr): 
> if nom_attr.startswith("_"): 
> return object.__getattribute__(self, nom_attr) 
> else: 
> return getattr(self.prototype, nom_attr) 
>
>
>
>
>
>
>
>
> > 
> > Code: 
> > 
> > from sqlalchemy import create_engine 
> > from sqlalchemy.orm import sessionmaker 
> > from sqlalchemy.ext.declarative import declarative_base 
> > from sqlalchemy import Column, Integer, String, ForeignKey 
> > from sqlalchemy import orm 
> > from sqlalchemy.orm import relationship 
> > 
> > BaseBDD = declarative_base() 
> > engine = create_engine('sqlite:///getattr_loop.db', echo=True) 
> > Session = sessionmaker(bind=engine) 
> > session = Session() 
> > 
> > class Prototype(BaseBDD): 
> > __tablename__ = "prototypes" 
> > id = Column(Integer, primary_key=True) 
> > 
> > npc = relationship("NPC", back_populates="prototype") 
> > 
> > class Character(BaseBDD): 
> > __tablename__ = "personnages" 
> > id = Column(Integer, primary_key=True) 
> > 
> > type = Column(String) 
> > 
> > __mapper_args__ = { 
> > 'polymorphic_on':type, 
> > 'polymorphic_identity':'personnage' 
> > } 
> > 
> > class NPC(Character): 
> > 
> > prototype_id = Column(ForeignKey('prototypes.id')) 
> > prototype = relationship("Prototype", back_populates="npc") 
> > 
> > __mapper_args__ = { 
> > 'polymorphic_identity':'npc' 
> > } 
> > 
> > def __init__(self, prototype): 
> > Character.__init__(self) 
> >     self.prototype = prototype 
> > prototype.npc.append(self) 
> > 
> > def __getattr__(self, nom_attr): 
> > return getattr(self.prototype, nom_attr) 
> > 
> > BaseBDD.metadata.create_all(engine) 
> > 
> > proto_jean = Prototype() 
> > jean = NPC(proto_jean) 
> > 
> > Trace: 
> > 
> > Traceback (most recent call last): 
> >   File "C:\Users\Sven\Downloads\loop_test_trace.py", line 52, in 
>  
> > jean = NPC(proto_jean) 
> >   File "", line 2, in __init__ 
> >   File 
> "C:\Python34\lib\site-packages\sqlalchemy\orm\instrumentation.py", 
> > line 379, in _new_state_if_none 
> > if hasattr(instance, self.STATE_ATTR): 
> >   File "C:\Users\Sven\Downloads\loop_test_trace.py", line 47, in 
> __getattr__ 
> > return getattr(self.prototype, nom_attr) 
> >   File "C:\Python34\lib\site-packages\sqlalchemy\orm\attributes.py", 
> line 
> > 242, in __get__ 
> > return self.impl.get(instance_state(instance), dict_) 
> > 

[sqlalchemy] Re: Relationships - crash when class instantiation

2018-02-24 Thread Sven
Hello,

I have now a small example which illustrates the problem.

Program logic: each NPC inherits from Character and each NPC is based on
a prototype. It means, for example, that you can define the prototype
"red_guard" and then create 100 red guards all based on this prototype.
Most of the NPC attributes are stored in the class Prototype. For
example, I can define in the prototype that a red guard has 1000 health
points, and it will be reflected automatically in every NPC based on that
prototype. This is why the __getattr__ function of NPC looks the attributes
up in the prototype.

*Code:*

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy import orm
from sqlalchemy.orm import relationship

BaseBDD = declarative_base()
engine = create_engine('sqlite:///getattr_loop.db', echo=True)
Session = sessionmaker(bind=engine)
session = Session()


class Prototype(BaseBDD):
    __tablename__ = "prototypes"
    id = Column(Integer, primary_key=True)

    npc = relationship("NPC", back_populates="prototype")


class Character(BaseBDD):
    __tablename__ = "personnages"
    id = Column(Integer, primary_key=True)

    type = Column(String)

    __mapper_args__ = {
        'polymorphic_on': type,
        'polymorphic_identity': 'personnage'
    }


class NPC(Character):

    prototype_id = Column(ForeignKey('prototypes.id'))
    prototype = relationship("Prototype", back_populates="npc")

    __mapper_args__ = {
        'polymorphic_identity': 'npc'
    }

    def __init__(self, prototype):
        Character.__init__(self)
        self.prototype = prototype
        prototype.npc.append(self)

    def __getattr__(self, nom_attr):
        return getattr(self.prototype, nom_attr)


BaseBDD.metadata.create_all(engine)

proto_jean = Prototype()
jean = NPC(proto_jean)

*Trace:*

Traceback (most recent call last):
  File "C:\Users\Sven\Downloads\loop_test_trace.py", line 52, in 
jean = NPC(proto_jean)
  File "", line 2, in __init__
  File "C:\Python34\lib\site-packages\sqlalchemy\orm\instrumentation.py", 
line 379, in _new_state_if_none
if hasattr(instance, self.STATE_ATTR):
  File "C:\Users\Sven\Downloads\loop_test_trace.py", line 47, in __getattr__
return getattr(self.prototype, nom_attr)
  File "C:\Python34\lib\site-packages\sqlalchemy\orm\attributes.py", line 
242, in __get__
return self.impl.get(instance_state(instance), dict_)
  File "C:\Users\Sven\Downloads\loop_test_trace.py", line 47, in __getattr__
return getattr(self.prototype, nom_attr)
  File "C:\Python34\lib\site-packages\sqlalchemy\orm\attributes.py", line 
242, in __get__
return self.impl.get(instance_state(instance), dict_)
  File "C:\Users\Sven\Downloads\loop_test_trace.py", line 47, in __getattr__
return getattr(self.prototype, nom_attr)
  File "C:\Python34\lib\site-packages\sqlalchemy\orm\attributes.py", line 
242, in __get__
return self.impl.get(instance_state(instance), dict_)
  File "C:\Users\Sven\Downloads\loop_test_trace.py", line 47, in __getattr__
return getattr(self.prototype, nom_attr)

Unfortunately, the line "return getattr(self.prototype, nom_attr)" is
causing an endless loop.

Does someone have a solution? How can I avoid this loop?
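
For reference, the workaround Mike posted in his reply (quoted earlier in this
digest) is to guard the lookup so that SQLAlchemy's internal attribute probes
do not recurse into the not-yet-assigned prototype:

def __getattr__(self, nom_attr):
    # let underscore names (SQLAlchemy's internal probes such as
    # _sa_instance_state) go through normal attribute lookup instead of
    # recursing into self.prototype, which does not exist yet
    if nom_attr.startswith("_"):
        return object.__getattribute__(self, nom_attr)
    else:
        return getattr(self.prototype, nom_attr)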

Thank you !

Sven




[sqlalchemy] Re: Relationships - crash when class instantiation

2018-02-14 Thread Sven
Hello,

Maybe, the way to find out is to press ctrl-C during your test and then 
send me the stack trace.


I am unfortunately not able to get any stack trace with ctrl-C. I don't
know why; it has no effect during the crash, which lasts a few seconds.

Haven't seen many endless loop issues over the years but ctrl-C should give 
you a stack trace.   Also try calling sqlalchemy.orm.configure_mappers() 
first and see if that's where it's hanging.


Calling sqlalchemy.orm.configure_mappers() before Weapon(arg1, arg2)
works fine. The problem is not there.

I must honestly say that I am a bit desperate. If you have not often seen
crashes like this during all these years, Mike, it certainly means that the
problem is vicious and caused by some error on our side. I expect to spend
a huge amount of time trying to recreate my model step by step, hoping to
find the problem. It could even be caused by our metaclasses or something
like that.

SQLAlchemy is open source. Is it possible to download the source and
execute SQLAlchemy's code step by step with a Python debugger like pdb?
Would that not be easier?
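
What I had in mind is nothing more than this, since the installed package is
plain Python source (just a sketch):

import pdb

# break just before the call that hangs, then step ("s") from here
# into SQLAlchemy's own code
pdb.set_trace()
w = Weapon(arg1, arg2)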

Thank you for your help.

Sven



[sqlalchemy] Re: Relationships - crash when class instantiation

2018-02-07 Thread Sven
Thank you Mike.

I'll try that :-) I'll keep you informed.



Re: [sqlalchemy] Relationships - crash when class instantiation

2018-02-07 Thread Sven
Hello Gaston,

Indeed, there are no mistakes in my example; it was just there to illustrate
my explanations.

I tried to reproduce the problem, without success. If the problem is
vicious, I could spend a huge amount of time trying to reproduce it.

This is why I thought it could be a good idea to ask here whether someone
has already met this kind of crash, or whether it is possible to identify
which SQLAlchemy operation is crashing.

Could I, for example, download SQLAlchemy's source code and try to execute
everything step by step with pdb?

Thank you !


On Wednesday, February 7, 2018 at 14:13:57 UTC+1, tonthon wrote:

> Hi,
>
> There's nothing that could cause the problem you describe in the example 
> you provide maybe the guilty lines where not included.
>
> The easier way to go should be to step by step reproduce your model 
> structure until you face the problem you describe and then post an example 
> code :  http://stackoverflow.com/help/mcve.
>
> Regards
> Gaston
>
>
>
> On 07/02/2018 at 12:09, Sven wrote:
>
> Hello everybody,
>
> Today, I have a strange problem regarding relationships.
>
> My project contains already several relationships and until now, I never 
> had any problem with these. But now, when I try to add a new relationship 
> between two classes, the entire program suddenly crashes without displaying 
> any error message. It looks like an infinite loop.
>
> I am sadly not able to reproduce the problem in a small example.
>
> I have a lot of classes with several relationships and the informations 
> are sometimes redundant and not optimized. For example:
>
> Player has a relationship with Race.
> Race has a relationship with Skeleton.
> Player has a relationship with Skeleton.
>
> It means that the Skeleton is also present in Player while the Skeleton 
> may also be accessible via Race...
>
> This is due to the fact that I am working from an existing project whose 
> save system was based on Pickle.
>
> Let's take the following example and let's assume that it reflects my 
> problem and crashes.
>
> class Player(Base):
>
>  __tablename__ = 'player'
>  id = Column(Integer, primary_key=True)
>
>  id_weapon = Column(Integer, ForeignKey('weapon.id'))
>  weapon = relationship("Weapon", back_populates="players")
>
>  id_room = Column(Integer, ForeignKey('room.id'))
>  room = relationship("Room", back_populates="players")
>
>  ... other relationships
>
> class Weapon(Base):
>
>  __tablename__ = 'weapon'
>  id = Column(Integer, primary_key=True)
>
>  players = relationship("Player", back_populates="weapon")
>
>  ... other relationships
>
>
>  def __init__(arg1, arg2):
>  print("I will never be executed.")
>  ... some code
>
> class Room(Base):
>
>  __tablename__ = 'room'
>  id = Column(Integer, primary_key=True)
>
>  players = relationship("Player", back_populates="room")
>
>
>  ... other relationships
>
>
>
> I tried to determine which line makes the whole thing crashes and it 
> happens when I try to do this:
>
> *w = Weapon(arg1, arg2)*
>
> What is strange is that the __init__ is not executed. It crashes between 
> the call *Weapon(arg1, arg2)* and the __init__. The print function* 
> print("I will never be executed.")* will for example not be executed. 
>
> And I have just one __init__ in this class. 
>
>
> When I delete one of the relationships in the class Weapon, everything 
> works fine.
>
> My hypothesis is that SQLAlchemy try to do something with relationships 
> when Weapon(arg1, arg2) is executed and for some reasons, it crashes. Like 
> I said, my relationships network is a bit strange and absolutly not 
> optimized. Is it possible that the relationships and the back_populates are 
> causing Infinite Loops ?
>
> Has someone already see that ? How could I learn more about the problem 
> and figure out what SQLALchemy is trying to do and where it crashes ?
>
> Thank you. 
>
> Sven

[sqlalchemy] Relationships - crash when class instantiation

2018-02-07 Thread Sven
Hello everybody,

Today, I have a strange problem regarding relationships.

My project already contains several relationships and until now I never
had any problem with them. But now, when I try to add a new relationship
between two classes, the entire program suddenly crashes without displaying
any error message. It looks like an infinite loop.

I am sadly not able to reproduce the problem in a small example.

I have a lot of classes with several relationships, and the information is
sometimes redundant and not optimized. For example:

Player has a relationship with Race.
Race has a relationship with Skeleton.
Player has a relationship with Skeleton.

It means that the Skeleton is also present in Player while the Skeleton may 
also be accessible via Race...

This is due to the fact that I am working from an existing project whose 
save system was based on Pickle.

Let's take the following example and let's assume that it reflects my 
problem and crashes.

class Player(Base):

    __tablename__ = 'player'
    id = Column(Integer, primary_key=True)

    id_weapon = Column(Integer, ForeignKey('weapon.id'))
    weapon = relationship("Weapon", back_populates="players")

    id_room = Column(Integer, ForeignKey('room.id'))
    room = relationship("Room", back_populates="players")

    # ... other relationships


class Weapon(Base):

    __tablename__ = 'weapon'
    id = Column(Integer, primary_key=True)

    players = relationship("Player", back_populates="weapon")

    # ... other relationships

    def __init__(self, arg1, arg2):
        print("I will never be executed.")
        # ... some code


class Room(Base):

    __tablename__ = 'room'
    id = Column(Integer, primary_key=True)

    players = relationship("Player", back_populates="room")

    # ... other relationships



I tried to determine which line makes the whole thing crash, and it
happens when I do this:

*w = Weapon(arg1, arg2)*

What is strange is that __init__ is not executed. It crashes between the
call *Weapon(arg1, arg2)* and __init__: the call
*print("I will never be executed.")*, for example, is never reached.

And I have just one __init__ in this class.


When I delete one of the relationships in the class Weapon, everything 
works fine.

My hypothesis is that SQLAlchemy tries to do something with the relationships
when Weapon(arg1, arg2) is executed and, for some reason, it crashes. Like
I said, my relationship network is a bit strange and absolutely not
optimized. Is it possible that the relationships and the back_populates are
causing infinite loops?

Has someone already seen this? How could I learn more about the problem and
figure out what SQLAlchemy is trying to do and where it crashes?
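
A sketch of one way I could try to narrow it down, forcing mapper
configuration and SQL logging before the failing call (the logging setup is a
generic debugging step on my side, not something from this thread):

import logging
import sqlalchemy.orm

logging.basicConfig()
logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)

# raises immediately if a relationship is misconfigured, and separates
# "hangs while configuring mappers" from "hangs while instantiating"
sqlalchemy.orm.configure_mappers()

w = Weapon(arg1, arg2)  # the call that never reaches __init__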

Thank you. 

Sven



[sqlalchemy] Re: Dictionaries with mapped objects as keys and integers as values

2017-11-30 Thread Sven
It works admirably well. Again, thank you very much for the support you 
provide.



[sqlalchemy] Dictionaries with mapped objects as keys and integers as values

2017-11-29 Thread Sven
Hello everybody,

Is it possible to map dictionaries whose keys are mapped objects and whose
values are simple integers?

I have the following case:

*In the program, there is one instance of "Options" which contains a
dictionary. This dictionary has players as keys and integers as values.
These integers represent the options of the player used as the key.*

from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer

Base = declarative_base()


class Options(Base):

    __tablename__ = "options"
    id = Column(Integer, primary_key=True)

    def __init__(self):
        self.options = {}  # Player -> Integer

    def set_options(self, player, value):
        self.options[player] = value


class Player(Base):

    __tablename__ = "players"
    id = Column(Integer, primary_key=True)


opt = Options()

john = Player()
jack = Player()

opt.set_options(john, 2)
opt.set_options(jack, 5)

print(opt.options)

Display :

>>> 
{<__main__.Player object at 0x05611908>: 5, <__main__.Player object 
at 0x05611860>: 2}
>>> 

Of course, in this particular case it doesn't make a lot of sense and it
could be designed in another way; it is just an example. I have a lot of
dictionaries with objects as keys in my project and I have no idea how I
should map them... and curiously, I am not able to find any example on the
Internet.

I found explanations related to mapped_collection in the SQLAlchemy
documentation, and it sounds a bit like what I'm looking for.

sqlalchemy.orm.collections.mapped_collection(keyfunc)
> *"A dictionary-based collection type with arbitrary keying."*


http://docs.sqlalchemy.org/en/latest/orm/collections.html

The Composite Association example also seems to be a good base for what I
want:

http://docs.sqlalchemy.org/en/latest/orm/extensions/associationproxy.html#composite-association-proxy

Do you have suggestions? Is it even possible? What would be a good way to
map this?
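
The direction I had in mind, combining the two links above, is roughly the
following untested sketch (the OptionValue association class and its column
names are my own invention, not from any example):

from sqlalchemy import ForeignKey
from sqlalchemy.orm import relationship
from sqlalchemy.orm.collections import attribute_mapped_collection
from sqlalchemy.ext.associationproxy import association_proxy


class OptionValue(Base):
    # association row: one (Options, Player) pair with an integer value
    __tablename__ = "option_values"
    id = Column(Integer, primary_key=True)
    options_id = Column(Integer, ForeignKey("options.id"))
    player_id = Column(Integer, ForeignKey("players.id"))
    value = Column(Integer)

    player = relationship("Player")


class Options(Base):
    __tablename__ = "options"
    id = Column(Integer, primary_key=True)

    # dictionary of OptionValue rows, keyed by the related Player object
    player_options = relationship(
        "OptionValue",
        collection_class=attribute_mapped_collection("player"),
        cascade="all, delete-orphan",
    )

    # exposes opt.options[player] = 5 directly, Player -> int
    options = association_proxy(
        "player_options", "value",
        creator=lambda player, value: OptionValue(player=player, value=value),
    )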

Thank you :-)



Re: [sqlalchemy] Is it a bad idea to avoid to use backref and back_populates ?

2017-11-09 Thread Sven
Thank you for your answers and your explanations ! :-)

It is clear to me, now and I will work on that.



Re: [sqlalchemy] Is it a bad idea to avoid to use backref and back_populates ?

2017-11-08 Thread Sven
What do you mean by "deduplication"?

I certainly have just a few exotic collection types; the others are standard
(lists, dictionaries, OrderedDict, etc.), but I don't understand why you are
asking that :p

Are you asking because you think the solution would be to always use
collections that can't contain duplicates (like set)? So, for example, I
could make a list class and just override the append method to avoid
duplicates?



Re: [sqlalchemy] Is it a bad idea to avoid to use backref and back_populates ?

2017-11-08 Thread Sven
Thank you for your answer. It is always very complete and interesting. 

*"so the super-duper best way to fix for the above use case, if *
*possible, would be to use collection_class=set rather than list. *
*sets are all around better for relationship() and if I was writing *
*SQLAlchemy today this would have been the default (back in python 2.3, *
*sets were kind of an exotic feature  :)  )"*

The solution with the set works for the specific code sample I gave, but
what about the other possible situations (when having a set is not
possible)? I think I will meet very different types of situations, and I was
more interested in knowing what the good way to proceed is in general, not
in particular cases.

*"Are you seeing objects that are being unpickled, and then are firing *
*off SQLAlchemy events such that collections are being populated twice? *
*  That's not supposed to happen.   Backrefs don't get in the way of *
*normal pickling / unpickling of collections and attributes, that's how *
*major functionality like the whole dogpile.cache example work. *

*Otherwise, I'm not sure how pickling relates to the above code sample *
*where ".append()" is being explicitly called."*

Sorry, that was not what I meant. Until now, every object in memory in the
program has been stored with Pickle.

Let's take an example:

In my program, each instance of the class "Room" contains the list of the
players located in this particular Room. But each instance of the class
"Player" also contains the room where the player is.

Here is a code to illustrate :

class Player():

    def __init__(self, name):
        self.name = name
        self.room = None

    def set_room(self, room):
        self.room = room

    def __repr__(self):
        return self.name


class Room():

    def __init__(self, name):
        self.name = name
        self.players = []

    def add_player(self, player):
        self.players.append(player)

    def __repr__(self):
        return self.name


room1 = Room("room1")

player1 = Player("player1")
player2 = Player("player2")

# The room is stored in the player instances:
player1.set_room(room1)
player2.set_room(room1)

# And the players are also stored in the room instance:
room1.add_player(player1)
room1.add_player(player2)

print(room1.players)
print(player1.room)
print(player2.room)


Result :

>>> 
[player1, player2]
room1
room1
>>>

And I have this kind of relationship almost everywhere.

For now, these objects are stored with Pickle. With Pickle, the
relationships between classes don't matter: it just stores the objects in a
file as they are, and unpickling allows us to retrieve everything with just
a few lines of code.

But now we are trying to change the save system to use only SQLAlchemy, and
I don't know exactly how I am supposed to configure my relationships,
because if backref or back_populates is used, the statement
"player1.set_room(room1)" will populate room1.players and player1.room
immediately. The statement "room1.add_player(player1)" will then do the same
thing, and I will have duplicates.

Of course, in this specific example I could use a set, but it's just one way
to illustrate a problem that I will meet multiple times, in different
situations and with different collections.

In this specific example, I could also just delete one of the statements,
but:
- the two statements are sometimes in different classes and different
files. Is it not dirty to decide that one of the statements will be deleted,
letting the other do all the work and populate both attributes?
- it could be a lot of work, because for most relationships I will have to
determine where these statements are, why and when they are used, and think
of a cleaner way to organize my code according to SQLAlchemy principles (one
modification automatically triggers the modification of the linked attribute
on the other side). And there are a lot of classes...

I think you are right that it would be too complicated, and a bit pointless,
to use SQLAlchemy and still try to keep the model absolutely separate and
independent from it.

*"You can go this road for one-to-many and many-to-one although this is *
*not a well-traveled use case. *

*For a many-to-many with "secondary", there can still be some conflicts *
*because the unit of work uses "backref" to look at the fact that the *
*two sides represent the same collection to resolve duplicate events *
*that occur on both sides."*

So I suppose it is probably a bad idea to avoid using backref and
back_populates, if that is not a well-traveled use case and if there can be
conflicts in many-to-many relations. So the cleaner way to proceed would be
to adapt all the classes and always keep only one modification statement per
bidirectional relationship?


[sqlalchemy] Is it a bad idea to avoid to use backref and back_populates ?

2017-11-08 Thread Sven
Hello,

I am currently working on my previous post about the OrderedDict and the
None values, and I have run into new problems and new questions. I didn't
find answers on the Internet, so here I am!

As explained in the official documentation (or in the following topic:
https://groups.google.com/forum/#!msg/sqlalchemy/2dVQTvzmi84/8J8lGpLfw4EJ),
using backref or back_populates on a relationship "sets up the
"synchronization" of the two directions, which consists of an event that
performs the complimentary set/append operation when an append/set occurs
on the other side".

Here is the example given in the topic previously mentioned:

p = Parent('parent1') 
c = Child('child1') 
c.parent = p 
p.children.append(c) 


*p's children will contain c twice.*

Indeed, the "c.parent = p" operation results in an "append" on the other
side, and vice versa.

In my case, I'm working on an existing project which contains a lot of
classes, and the methods are already designed to add the objects on both
sides. It is so because the program uses Pickle to persist everything (my
objective is to replace it with SQLAlchemy). Of course, I could adapt the
code, but it would be a lot of work, and I don't like the idea that my
structures are automatically synchronized without explicit declarations.
Furthermore, what will happen if I decide, one day, to stop using
SQLAlchemy? My code would be too dependent on the functionality provided by
SQLAlchemy.

So, my questions are :

1) Is it feasible to avoid using backref or back_populates? Would it be a
bad idea to work without these features? Will I face, for example, an
inconsistent program state?
2) Is it possible to use back_populates and have SQLAlchemy detect that it
should not append something which was already inserted?

Thank you very much !

Regards,

Sven




[sqlalchemy] Relationship: OrderedDict - errors when None values

2017-11-06 Thread Sven
Hello,

I'm currently working on an online game which is already advanced and fairly
large. As I explained in a previous mail, my goal is to replace the save
system, currently based on Pickle, with SQLAlchemy. If possible, it would be
better for me to stay with the initial class organization and try to map it
to the database without changing anything.

It is a MUD, a textual game. Each player is in a room and each room
contains exits leading to other rooms. For example:

Player Brian is in a room "kitchen". The room "kitchen" contains an exit
"north" leading to the room "bathroom". If player Brian types "north", he
leaves the kitchen and goes into the bathroom.

Here is my class organization:

Each room contains an instance of the class "Exits". Exits is a container
class which holds an OrderedDict always based on the same template:

EXIT_NAMES = OrderedDict()
EXIT_NAMES["south"] = None
EXIT_NAMES["southwest"] = None
EXIT_NAMES["west"] = None
EXIT_NAMES["northwest"] = None
EXIT_NAMES["north"] = None
EXIT_NAMES["northeast"] = None
EXIT_NAMES["east"] = None
EXIT_NAMES["southeast"] = None

Initially, everything is set to None: the room has no exits. When an exit
is added, for example to the north of the room, EXIT_NAMES["north"]
contains an instance of the class "Exit".

Only the values different from None have to be inserted into the database.

I found the following example about mapping an OrderedDict in the
SQLAlchemy documentation:

http://docs.sqlalchemy.org/en/latest/orm/collections.html#custom-dictionary-based-collections

I tried to use it, and here is what I have done so far:

from collections import OrderedDict
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base, DeclarativeMeta
from sqlalchemy import orm
from sqlalchemy.orm import relationship
from sqlalchemy import Column, Integer, Boolean, String, ForeignKey
from sqlalchemy.orm.collections import attribute_mapped_collection
from sqlalchemy.orm.collections import column_mapped_collection
from sqlalchemy.util import OrderedDict
from sqlalchemy.orm.collections import MappedCollection, collection

Base = declarative_base()

EXIT_NAMES = OrderedDict()
EXIT_NAMES["south"] = None
EXIT_NAMES["southwest"] = None
EXIT_NAMES["west"] = None
EXIT_NAMES["northwest"] = None
EXIT_NAMES["north"] = None
EXIT_NAMES["northeast"] = None
EXIT_NAMES["east"] = None
EXIT_NAMES["southeast"] = None


class OrderedExits(OrderedDict, MappedCollection):

    def __init__(self, *args, **kw):
        MappedCollection.__init__(self, keyfunc=lambda node: node.name)
        OrderedDict.__init__(self, *args, **kw)


class Exits(Base):

    __tablename__ = "exits_container"
    id = Column(Integer, primary_key=True)
    exits = relationship("Exit", collection_class=OrderedExits)

    def __init__(self):
        self.exits = OrderedExits(EXIT_NAMES)

    def __repr__(self):
        text = ""
        for s in self.exits.keys():
            text = text + s + " : " + str(self.exits[s]) + "\n"
        return text


class Exit(Base):

    __tablename__ = "exit"
    id = Column(Integer, primary_key=True)
    id_sorties = Column(Integer, ForeignKey("exits_container.id"),
                        nullable=False)

    direction = Column(String)

    def __init__(self, direction):
        self.direction = direction

    def __repr__(self):
        return self.direction


if __name__ == '__main__':

    engine = create_engine(
        'postgresql://USERNAME:PASSWORD@localhost/DATABASE', echo=True)

    Base.metadata.create_all(engine)

    Session = sessionmaker(bind=engine)
    session = Session()

    south = Exit("south")
    west = Exit("west")

    e = Exits()

    e.exits["south"] = south
    e.exits["west"] = west

    session.add(south)
    session.add(west)
    session.add(e)

    session.commit()

    print(e)


Here is the error that I get :

Traceback (most recent call last):
  File "C:\Users\Sven\Desktop\SQL Alchemy Tests\ordereddict.py", line 72, 
in 
e = Exits()
  File "", line 4, in __init__
  File "C:\Python34\lib\site-packages\sqlalchemy\orm\state.py", line 415, 
in _initialize_instance
manager.dispatch.init_failure(self, args, 

[sqlalchemy] Re: SQLAlchemy : declarative_base and metaclass conflict

2017-10-20 Thread Sven
Exactly what I was looking for, and it works (even applied to my project).

I tried so many things these last few days, and the solution now looks so
simple.

Thank you very much!



Re: [sqlalchemy] SQLAlchemy : declarative_base and metaclass conflict

2017-10-20 Thread Sven
Thank you for your answer.

I'm sorry, I have not been clear enough regarding "__abstract__ = True".

I suppose that I will have to add this to a lot of classes, for the
following reasons:
1) because it will allow me to persist the classes one by one and still be
able to run and test the project (I will not get all the errors about
missing table names and primary keys). So I would have to delete the
"__abstract__" when I begin to work on the persistence of a new class.
2) because I thought that the best solution in this case is to map only the
concrete classes. So, in my example, I would have to map "Player" and
"NPC", but not "Character". Only the classes at the bottom of the hierarchy
would have to be mapped. That's still a lot of classes, but probably easier
to implement.

But I have to say that this is not absolutely clear to me for now. This is
the first time I am using SQLAlchemy. Do you think that this method is
possible and is the right way to proceed?
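
To make point 2 concrete, here is the kind of layout I have in mind (an
untested sketch; Player and NPC are only illustrative here, and only the
classes at the bottom of the hierarchy get a __tablename__):

Base = declarative_base(metaclass=MetaBase)


class Stockable(Base):
    __abstract__ = True        # never mapped itself


class Character(Stockable):
    __abstract__ = True        # intermediate class, still not mapped


class Player(Character):
    __tablename__ = 'players'  # only the concrete classes get a table
    id = Column(Integer, primary_key=True)


class NPC(Character):
    __tablename__ = 'npcs'
    id = Column(Integer, primary_key=True)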

On Friday, October 20, 2017 at 20:02:40 UTC+2, Mike Bayer wrote:
>
>
> CONTINUING !  sorry
>
>
> On Fri, Oct 20, 2017 at 11:55 AM, Sven Dumay  > wrote:
>
>>
>> *I tried other things and I found the following solution :*
>>
>> from sqlalchemy.ext.declarative import declarative_base, DeclarativeMeta
>> from sqlalchemy import Column, Integer
>>
>> class MetaBase(DeclarativeMeta):
>>
>> def __init__(cls, nom, bases, contenu):
>> super(MetaBase, cls).__init__(nom, bases, contenu)
>> print("Init MetaBase")
>>
>> Base = declarative_base(metaclass = MetaBase)
>>
>> class Stockable(Base):
>>
>> __abstract__ = True
>>
>> def __init__(self):
>> print("Init Stockable")
>>
>> class Character(Stockable):
>>
>> __tablename__ = 'characters'
>> id = Column(Integer, primary_key=True)
>>  
>> def __init__(self, name):
>> self.name = name
>> print("Init character")
>>
>>
>> jean = Character("Jean")
>> print(jean.name)
>>
>>
>>
> this seems like roughly the correct approach.
>  
>
>> It seems to work. I get the following result :
>>
>> >>> 
>> Init MetaBase
>> Init MetaBase
>> Init MetaBase
>> Init compte
>> Jean
>> >>>
>>
>> However, the problem with this method is that I have to add *"__abstract__ 
>> = True" *to every class which is inherited by Stockable... so, about 400 
>> classes. 
>>
>
> I don't see why that is.  If these classes are mapped to tables (which, if 
> they are persisted, they are), then there is no reason to add 
> "__abstract__".As in my previous email, how these 400 classes link 
> to tables is what needs to be answered and then we can formulate the 
> correct calling style.
>
>  
>
>> It is not very clean. Is it possible to avoid that by using something 
>> similar to my first code ? It would be great !
>>
>> Thank you very much.
>>
>> Sven
>>
>
>



[sqlalchemy] Re: SQLAlchemy : declarative_base and metaclass conflict

2017-10-20 Thread Sven Dumay
Version of Python : 3.4.0 
Version of SQLAlchemy : 1.2.0b2



[sqlalchemy] SQLAlchemy : declarative_base and metaclass conflict

2017-10-20 Thread Sven Dumay
Hello,

I am currently working on an existing project (an online game) which is
already advanced and fairly large.

My goal is to replace the save system, currently based on Pickle, with
SQLAlchemy. Not so easy, because I have to deal with the existing classes
and there is a lot of work to do (about 400 classes to persist).

I'm not sure what the best way to proceed is, and I think that I need some
help.

*Let's look at the class organization of the project* (diagram attached):

<https://lh3.googleusercontent.com/-rfFB3ExzCXg/WeoZakWziFI/AVM/RACvOxxZmSgySRTIswpSNtYO9fd_ht-PgCLcBGAs/s1600/schema2.png>


Every class which should be persistent has to inherit from Stockable. It is
already designed this way and I think it would be too complicated to change
that. Below Stockable there are hundreds of classes with their own
hierarchy. For example, Character inherits from Stockable, and Player and
NPC (non-player character) inherit from Character.

My problem today is that I don't know how to proceed regarding the
metaclass "MetaBase". I am not able to use declarative_base() and MetaBase
at the same time; there is a metaclass conflict. I found some other topics
about this problem on the Internet and tried several solutions, but none of
them works for me.

*To sum up, here is how it basically works without SQLAlchemy:*

class MetaBase(type):

    def __init__(cls, nom, bases, contenu):
        type.__init__(cls, nom, bases, contenu)
        pass


class Stockable(metaclass=MetaBase):

    def __init__(self):
        pass


class Character(Stockable):

    def __init__(self):
        pass


*Here is what I would like to do with SQLAlchemy:*

from sqlalchemy.ext.declarative import declarative_base, DeclarativeMeta
from sqlalchemy import Column, Integer

Base = declarative_base()

class MetaBase(DeclarativeMeta):

    def __init__(cls, nom, bases, contenu):
        super(MetaBase, cls).__init__(nom, bases, contenu)
        print("Init MetaBase")


class Stockable(metaclass=MetaBase):

    def __init__(self):
        print("Init Stockable")

class Character(Stockable, Base):

    __tablename__ = 'characters'
    id = Column(Integer, primary_key=True)

    def __init__(self, name):
        self.name = name
        print("Init character")


jean = Character("Jean")
print(jean.name)


Here is what I get:

>>> 
Traceback (most recent call last):
  File "C:\Users\Sven\Desktop\SQL Alchemy Tests\test2.py", line 10, in 

class Stockable(metaclass = MetaBase):
  File "C:\Users\Sven\Desktop\SQL Alchemy Tests\test2.py", line 7, in 
__init__
super(MetaBase, cls).__init__(nom, bases, contenu)
  File "C:\Python34\lib\site-packages\sqlalchemy\ext\declarative\api.py", 
line 64, in __init__
_as_declarative(cls, classname, cls.__dict__)
  File "C:\Python34\lib\site-packages\sqlalchemy\ext\declarative\base.py", 
line 88, in _as_declarative
_MapperConfig.setup_mapping(cls, classname, dict_)
  File "C:\Python34\lib\site-packages\sqlalchemy\ext\declarative\base.py", 
line 103, in setup_mapping
cfg_cls(cls_, classname, dict_)
  File "C:\Python34\lib\site-packages\sqlalchemy\ext\declarative\base.py", 
line 125, in __init__
clsregistry.add_class(self.classname, self.cls)
  File 
"C:\Python34\lib\site-packages\sqlalchemy\ext\declarative\clsregistry.py", 
line 34, in add_class
if classname in cls._decl_class_registry:
AttributeError: type object 'Stockable' has no attribute 
'_decl_class_registry'
>>>

Does someone know what this means and how it can be resolved?

*I tried other things and found the following solution:*

from sqlalchemy.ext.declarative import declarative_base, DeclarativeMeta
from sqlalchemy import Column, Integer

class MetaBase(DeclarativeMeta):

    def __init__(cls, nom, bases, contenu):
        super(MetaBase, cls).__init__(nom, bases, contenu)
        print("Init MetaBase")

Base = declarative_base(metaclass=MetaBase)

class Stockable(Base):

    __abstract__ = True

    def __init__(self):
        print("Init Stockable")

class Character(Stockable):

    __tablename__ = 'characters'
    id = Column(Integer, primary_key=True)

    def __init__(self, name):
        self.name = name
        print("Init character")


jean = Character("Jean")
print(jean.name)


It seems to work. I get the following result:

>>> 
Init MetaBase
Init MetaBase
Init MetaBase
Init compte
Jean
>>>

However, the problem with this method is that I have to add *"__abstract__ 
= True"* to every class that inherits from Stockable... so, about 400 
classes. It is not very clean. Is it possible to avoid that by using 
something similar to my first code? That would be great!

Thank you very much.

Sven


Re: [sqlalchemy] Re: Column order with declarative base

2014-12-30 Thread Sven Teresniak
Aaah Michael,
thanks!
This is awesome!

I tried a lot, and all the time I felt I was missing exactly this kind of easy 
answer. ;)

Thanks again.
Will implement this now.
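Roughly along these lines, I think (a minimal sketch, reusing the NameByClass
mixin and RealTable from my original post further down; the explicit
PrimaryKeyConstraint is what pins the key order):

from sqlalchemy import Column, Integer, ForeignKey, PrimaryKeyConstraint
from sqlalchemy.ext.declarative import declared_attr

class AbstractPK(NameByClass):
    """ composite primary key mixin with an explicit, stable key order """

    @declared_attr
    def key1(cls):
        return Column(ForeignKey("somekey.keypart1"), primary_key=True)

    @declared_attr
    def key2(cls):
        return Column(ForeignKey("anotherkey.keypart2"), primary_key=True)

    key3 = Column(Integer, primary_key=True)

    @declared_attr
    def __table_args__(cls):
        # names the key columns explicitly, so the DDL and the tuple passed
        # to Query.get() no longer depend on the class dict ordering
        return (PrimaryKeyConstraint('key1', 'key2', 'key3'),)

# with the order pinned, the lookup is always
# session.query(RealTable).get((key1_value, key2_value, key3_value))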

Sven

On Tuesday, December 30, 2014 at 17:01:14 UTC+1, Michael Bayer wrote:
>
> why don’t you set up your PrimaryKeyConstraint directly?
>
> class AbstractPK(NameByClass):
>     """ this table defines a frequently used composite primary key """
>
>     @declared_attr
>     def key1(cls):
>         return Column(ForeignKey("somekey.keypart1"), primary_key=True)
>
>     @declared_attr
>     def key2(cls):
>         return Column(ForeignKey("anotherkey.keypart2"), primary_key=True)
>
>     key3 = Column(Integer, primary_key=True)
>
>     @declared_attr
>     def __table_args__(self):
>         return (
>             PrimaryKeyConstraint('key1', 'key2', 'key3'),
>         )
>
>
>
>
> Sven Teresniak > wrote:
>
>
> On Friday, July 2, 2010 at 02:24:05 UTC+2, Michael Bayer wrote:
>
>> The Column object contains a "sort key" when constructed, against a 
>> single global value, that is used as a sort key when the Table is 
>> generated.  This is to get around the fact that the attribute dictionary of 
>> the declarative class is unordered.   
>>
>> The mixin columns should copy their "sort key" over, or it should somehow 
>> be tailored in the declarative base so that the order of the two columns 
>> stays relatively the same, and perhaps is also tailored to be at the same 
>> position relative to the other columns in the ultimate table.
>>
>> I'd welcome any patches in this regard since I don't usually deal with 
>> the "mixin" feature.
>>
> Is there any simple way to modify/set this "sort key", or is there any way 
> for me to work around this random ordering in the class dict? Or to simply 
> inspect the ordering and generate code that re-orders my composite primary 
> key parts accordingly?
>
> Thanks
> Sven
>



Re: [sqlalchemy] Re: Column order with declarative base

2014-12-30 Thread Sven Teresniak

On Friday, July 2, 2010 at 02:24:05 UTC+2, Michael Bayer wrote:

> The Column object contains a "sort key" when constructed, against a single 
> global value, that is used as a sort key when the Table is generated.  This 
> is to get around the fact that the attribute dictionary of the declarative 
> class is unordered.   
>
> The mixin columns should copy their "sort key" over, or it should somehow 
> be tailored in the declarative base so that the order of the two columns 
> stays relatively the same, and perhaps is also tailored to be at the same 
> position relative to the other columns in the ultimate table.
>
> I'd welcome any patches in this regard since I don't usually deal with the 
> "mixin" feature.
>
Is there any simple way to modify/set this "sort key", or is there any way 
for me to work around this random ordering in the class dict? Or to simply 
inspect the ordering and generate code that re-orders my composite primary 
key parts accordingly?

Thanks
Sven



Re: [sqlalchemy] Column order with declarative base

2014-12-30 Thread Sven Teresniak
On Friday, July 2, 2010 at 00:06:10 UTC+2, Chris Withers wrote:
>
> Mike Lewis wrote:
> > I'm trying to do some DDL creation with declarative base.  THe problem
> > I am running into is that I'm using a mixin, and it seems that the
> > order the columns are being created in is different than the order
> > they're declared with.  Is there any way to control this?
>
> Please provide a simple, small example of your problem :-)
>
> Also, is there a reason the order of column creation matters?
>
Yes. When you want to query a composite primary key using the Query.get() 
method, you need the exact ordering of the key parts. 
For example

PRIMARY KEY (key3, key2, key1),

vs.

PRIMARY KEY (key1, key2, key3),

So the position of the parts of the composite key in the resulting DDL is 
very important, but (more or less) random.
For me (and a lot of other people) it seems very difficult to deal with 
this kind of randomness when generating DDL using mixins.

Simple example:

from sqlalchemy import Column, Integer, Unicode, ForeignKey
from sqlalchemy.ext.declarative import declarative_base, declared_attr

Base = declarative_base()

class NameByClass(object):
    """ just to set the name. no table. """

    @declared_attr
    def __tablename__(cls):
        return cls.__name__.lower()


class SomeKey(NameByClass, Base):
    """ some simple table to refer to. table in db. """

    keypart1 = Column(Integer, primary_key=True)
    value = Column(Unicode)


class AnotherKey(NameByClass, Base):
    """ another simple table to refer to. table in db. """

    keypart2 = Column(Integer, primary_key=True)
    value = Column(Unicode)


class AbstractPK(NameByClass):
    """ this mixin defines a frequently used composite primary key """

    @declared_attr
    def key1(cls):
        return Column(ForeignKey("somekey.keypart1"), primary_key=True)

    @declared_attr
    def key2(cls):
        return Column(ForeignKey("anotherkey.keypart2"), primary_key=True)

    key3 = Column(Integer, primary_key=True)


class RealTable(AbstractPK, Base):
    """ a real table with the composite PK from above and references to
    SomeKey and AnotherKey """

    someothercolumn = Column(Unicode)


if __name__ == "__main__":
    print "start"

    from sqlalchemy import create_engine
    engine = create_engine('postgresql://localhost:5432/dbname',
                           echo=True,
                           encoding="utf-8")

    Base.metadata.drop_all(engine)
    Base.metadata.create_all(engine)
    print "done"

This leads to

CREATE TABLE realtable (
    key3 SERIAL NOT NULL,
    someothercolumn VARCHAR,
    key2 INTEGER NOT NULL,
    key1 INTEGER NOT NULL,
    PRIMARY KEY (key3, key2, key1),
    FOREIGN KEY(key2) REFERENCES anotherkey (keypart2),
    FOREIGN KEY(key1) REFERENCES somekey (keypart1)
)

And this in turn leads to

session.query(RealTable).get( (keypart3, keypart2, keypart1) )

which means I have to change my code every time the ordering of elements 
in Python's class dictionary changes. 

Best wishes for 2015
Sven

>



[sqlalchemy] Re: Autoload=True and schema changes

2010-10-11 Thread Sven A. Schmidt
Michael,

thanks for your reply, which confirms what I expected is going on behind
the scenes. However, for the case I mentioned (the "env" field that
the mapper could not find anymore) I can say for certain that this
particular table was not changed at all by DDL. This really puzzled
me, obviously. I actually have to admit that I have no proof that
schema changes trip the application. It's just an observation I made
every time this has happened. It may well be that there's something
else happening every time the schema changes. Is there actually any
way at all a mapper could "forget" about a column when there's no DDL
change to the table?

I should really try and reproduce the problem first in a controlled
environment to be sure I'm isolating the correct problem.

Cheers,
Sven

On Oct 11, 6:12 pm, Michael Bayer  wrote:
> On Oct 11, 2010, at 11:17 AM, Sven A. Schmidt wrote:
>
>
>
>
>
> > Hi,
>
> > I've got an issue which pops up from time to time with an SQLAlchemy
> > based webservice application and it revolves around schema changes
> > happening while an engine with autoload=True has been instantiated and
> > is being used to start sessions.
>
> > What happens is that someone on the team will make a schema change
> > (mostly backwards compatible, for example increasing the varchar size
> > of a field) without asking for a restart of the webservice. Subsequent
> > calls to the webservice will fail with quite obscure (i.e. seemingly
> > unrelated) error messages. For example, I've seen errors like the
> > following:
>
> > Error: type object 'AppEnvRpe2Table' has no attribute 'env'
>
> > (AppEnvRpe2Table is the object mapped to an sqlalchemy.Table.) First I
> > thought someone had dropped the column (they'd actually do that...)
> > but to my surprise the given table did have a column 'env' (and it did
> > since the start). A restart of the service (i.e. reload of the schema)
> > always fixed this kind of problem, which led me to believe it's
> > related to the engine's internal representation of the schema somehow
> > conflicting with updates in the database even when it's in places that
> > aren't used in the particular query being executed.
>
> > Obviously, it's never a good idea to change the schema when a process
> > is running, but in this case I have no control over the changes being
> > made. I've warned about it but it happens anyway and then on next
> > execution of the webservice a crash results. Sometimes quite a bit of
> > time between the schema change and the calling of the webservice can
> > occur so it's not always immediately clear that it was the schema
> > change that caused the problem. (I'm actually thinking of keeping this
> > issue around because I've won lots of beers when they complained about
> > crashes and it turned out to be their fault to begin with but you can
> > drink only so much... ;)
>
> > So that being the set-up, I'm wondering what I can do to remedy the
> > situation. Things I considered:
>
> > - Re-create the engine on every request. Bad, because it increases run-
> > time for every request and most of the time (i.e. when there was no
> > schema change) it is unnecessary. There are quite a few tables in the
> > schema and autoload time is not negligible.
>
> > - Re-create the engine after a schema change. The question is how do I
> > programmatically notice a schema change? I can't rely on exceptions,
> > because there's not a predictable error popping up when this occurs.
>
> > - Catch errors and retry with new engine instance. The problem is that
> > I'd have to catch pretty much any exception (unpredictable errors, see
> > above), and I'd possibly obscure other issues by doing so.
>
> > - Get an extra liver and just suffer the consequences...
>
> > Any ideas greatly appreciated!
>
> > PS: This is sqlalchemy 0.5.8 and I cannot easily update to 0.6 if that
> > should be required.
>
> Increasing the size of a VARCHAR field in the DB will never have any effect 
> on the app as it runs.   If the field has already been reflected, your 
> application will never see the new value.   If the field has not yet been 
> reflected, your app, upon reflecting the table, will see the new size, stick 
> it in the type object, and never look at it again.   So that's not the source 
> of the issue.
>
> The only thing that would cause any issues are renames of columns/tables, or 
> drops of columns/tables that are mapped.  If that is the source, then the 
> problem has l

[sqlalchemy] Autoload=True and schema changes

2010-10-11 Thread Sven A. Schmidt
Hi,

I've got an issue which pops up from time to time with an SQLAlchemy
based webservice application and it revolves around schema changes
happening while an engine with autoload=True has been instantiated and
is being used to start sessions.

What happens is that someone on the team will make a schema change
(mostly backwards compatible, for example increasing the varchar size
of a field) without asking for a restart of the webservice. Subsequent
calls to the webservice will fail with quite obscure (i.e. seemingly
unrelated) error messages. For example, I've seen errors like the
following:

Error: type object 'AppEnvRpe2Table' has no attribute 'env'

(AppEnvRpe2Table is the object mapped to an sqlalchemy.Table.) First I
thought someone had dropped the column (they'd actually do that...)
but to my surprise the given table did have a column 'env' (and it did
since the start). A restart of the service (i.e. reload of the schema)
always fixed this kind of problem, which led me to believe it's
related to the engine's internal representation of the schema somehow
conflicting with updates in the database even when it's in places that
aren't used in the particular query being executed.

Obviously, it's never a good idea to change the schema when a process
is running, but in this case I have no control over the changes being
made. I've warned about it but it happens anyway and then on next
execution of the webservice a crash results. Sometimes quite a bit of
time between the schema change and the calling of the webservice can
occur so it's not always immediately clear that it was the schema
change that caused the problem. (I'm actually thinking of keeping this
issue around because I've won lots of beers when they complained about
crashes and it turned out to be their fault to begin with but you can
drink only so much... ;)

So that being the set-up, I'm wondering what I can do to remedy the
situation. Things I considered:

- Re-create the engine on every request. Bad, because it increases run-
time for every request and most of the time (i.e. when there was no
schema change) it is unnecessary. There are quite a few tables in the
schema and autoload time is not negligible.

- Re-create the engine after a schema change. The question is how do I
programmatically notice a schema change? I can't rely on exceptions,
because there's not a predictable error popping up when this occurs.

- Catch errors and retry with new engine instance. The problem is that
I'd have to catch pretty much any exception (unpredictable errors, see
above), and I'd possibly obscure other issues by doing so.

- Get an extra liver and just suffer the consequences...

Any ideas greatly appreciated!

PS: This is sqlalchemy 0.5.8 and I cannot easily update to 0.6 if that
should be required.




[sqlalchemy] Re: MySQL DATE_ADD function

2010-10-05 Thread Sven A. Schmidt
Hi Bryan,

the only tricky bit in your SQL is the dangling 'DAY', because there's
no operator to tie it to the rest. Otherwise you should be able to
write (schema.AppDcRpe2 is just a Table object I'm using as an
example):

>>> q = session.query(schema.AppDcRpe2).filter(
...     func.date_add(schema.AppDcRpe2.asof,
...                   func.interval(1 - func.dayofweek(schema.AppDcRpe2.asof)))
...     < func.sysdate)
>>> print q
SELECT kdb_app_dc_rpe2.id AS kdb_app_dc_rpe2_id, kdb_app_dc_rpe2.asof
AS kdb_app_dc_rpe2_asof, kdb_app_dc_rpe2.instance_name AS
kdb_app_dc_rpe2_instance_name, kdb_app_dc_rpe2.dc AS
kdb_app_dc_rpe2_dc, kdb_app_dc_rpe2.rpe2_total AS
kdb_app_dc_rpe2_rpe2_total, kdb_app_dc_rpe2.rpe2_used AS
kdb_app_dc_rpe2_rpe2_used, kdb_app_dc_rpe2.rpe2_unused AS
kdb_app_dc_rpe2_rpe2_unused
FROM kdb_app_dc_rpe2
WHERE date_add(kdb_app_dc_rpe2.asof, interval(:dayofweek_1 -
dayofweek(kdb_app_dc_rpe2.asof))) < :date_add_1

which is *almost* what you need. Can MySQL 'INTERVAL' perhaps be
written in 'function form', i.e. something like interval(x, 'DAY')? In
that case you should be able to translate it fully.

Or maybe there's a way to 'abuse' the alias method, like so:

>>> q = session.query(schema.AppDcRpe2).filter(
...     func.date_add(schema.AppDcRpe2.asof,
...                   func.interval(1 - func.dayofweek(schema.AppDcRpe2.asof)).alias('DAY'))
...     < func.sysdate)
>>> print q
SELECT kdb_app_dc_rpe2.id AS kdb_app_dc_rpe2_id,
kdb_app_dc_rpe2.asof AS kdb_app_dc_rpe2_asof, kdb_app_dc_rpe2.instance_name
AS kdb_app_dc_rpe2_instance_name, kdb_app_dc_rpe2.dc AS kdb_app_dc_rpe2_dc,
kdb_app_dc_rpe2.rpe2_total AS kdb_app_dc_rpe2_rpe2_total,
kdb_app_dc_rpe2.rpe2_used AS kdb_app_dc_rpe2_rpe2_used,
kdb_app_dc_rpe2.rpe2_unused AS kdb_app_dc_rpe2_rpe2_unused
FROM kdb_app_dc_rpe2, interval(:dayofweek_1 -
dayofweek(kdb_app_dc_rpe2.asof)) "DAY"
WHERE date_add(kdb_app_dc_rpe2.asof, interval(:dayofweek_1 -
dayofweek(kdb_app_dc_rpe2.asof))) < :date_add_1

Except for the double quotes that looks to be pretty close to what you
want. But then again rather than massaging that into place you may as
well build a text SQL from your bits, I guess. The above would also be
MySQL specific, I believe. (BTW I have not tried to run any of this,
this is just the output of the parsed statements.)
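
For what it's worth, a rough sketch of the "text SQL from your bits" idea,
assuming a Table object named table with a date column date_col and an open
connection conn (hypothetical names, and untested like the rest of this):

from sqlalchemy import text

col = table.c.date_col.name   # column name taken from the Table metadata
stmt = text(
    "SELECT * FROM %s WHERE "
    "DATE_ADD(%s, INTERVAL(1 - DAYOFWEEK(%s)) DAY) < SYSDATE()"
    % (table.name, col, col)
)
rows = conn.execute(stmt).fetchall()

That way the column names only live in the metadata, so renaming a column
doesn't mean hunting through string literals.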

-sas


On Oct 5, 4:45 pm, Bryan Vicknair  wrote:
> On Tue, Oct 5, 2010 at 1:58 AM, Chris Withers  wrote:
> > Are you looking for something database agnostic or something that just works
> > for MySQL?
>
> > If the latter, look at text:
> >http://www.sqlalchemy.org/docs/core/tutorial.html#using-text
>
> > If the former, then you'll want a database agnostic implementation. So,
> > what's the above sql actually trying to achieve?
>
> > Chris
>
> I'm fine with a MySQL-only solution.  The text construct is always the 
> fallback,
> but I'm wondering if there is a way that I can use the attributes of my class
> for the column name, instead of just a string.  My column names are going to
> change soon, but my object model will stay the same, so I am trying not to
> explicitly use the column names in my code.
>
> Can I do something like this?
> 'DATE_ADD(' + Class.dateAttr + ', INTERVAL(1 - ' \
>         + 'DAYOFWEEK(' + Class.dateAttr + ')) DAY)'
>
> If I can't use my class's attributes, is there a way I can at least use the
> table object's columns like this:
> 'DATE_ADD(' + table.c.date_col.name + ', INTERVAL(1 - ' \
>         + 'DAYOFWEEK(' + table.c.date_col.name + ')) DAY)'
>
> I prefer one of these to a string because I will get an error during testing
> when the statement is encountered.  With a string, I will only get an error if
> the statement actually runs in the DB.




[sqlalchemy] Re: Use regexp in like

2010-09-21 Thread Sven A. Schmidt
Michael,

I hope I'm not misunderstanding what you're trying to achieve, but isn't
a combination of LIKE and NOT LIKE what you want to do here? As in:

create table test (
 t varchar2(255)
);

insert into test values ('AA123');
insert into test values ('A0123');

select * from test where t like 'A%' and t not like 'AA%';

-> A0123

Or do you want to match "any character, but only once" rather than 'A'
explicitly? The regex you posted ('A[0-9]+') looks like it's really a
literal 'A' you're looking for (unless that's just simplified for testing
purposes).
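
In SQLAlchemy terms that would be something along these lines (a rough
sketch, assuming a hypothetical mapped class Test for the test table above):

rows = (session.query(Test)
        .filter(Test.t.like('A%'))
        .filter(~Test.t.like('AA%'))
        .all())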

-sas

On Sep 20, 4:38 pm, Michael Hipp  wrote:
> On 9/17/2010 10:12 AM, Michael Bayer wrote:
>
>
>
>
>
>
>
> > On Sep 17, 2010, at 10:58 AM, Michael Bayer wrote:
>
> >> On Sep 17, 2010, at 9:14 AM, Michael Hipp wrote:
>
> >>> On 9/14/2010 2:23 PM, Michael Hipp wrote:
>  Is it possible to use a regexp in a like() clause? Or some other way to 
>  achieve
>  something similar?
>
> >>> Can anyone suggest an approach to search a field with a regexp?
>
> >> if you were using Postgresql, you could use   
> >> somecolumn.op("~")(someregexp)
>
> >>http://www.postgresql.org/docs/8.4/interactive/functions-matching.htm...
>
> >> cant speak for other platforms though you'd have to consult their 
> >> documentation.
>
> > PG also supports MATCH since I see we have some unit tests for that, i.e. 
> > column.match(other).     I'm not seeing offhand in PG's docs what it 
> > interprets the "MATCH" operator as, i.e. is it a "~", "SIMILAR TO", not 
> > sure.
>
> > I guess the reasons I've never had interest in regexp matching in databases 
> > are:
>
> > 1. its always bad to search through tables without being able to use indexes
> > 2. if you're needing to dig into text, it suggests the atoms of that text 
> > should be represented individually in their own column (i.e. normalize)
> > 3. no really, I'm doing flat out full text searching on documents and don't 
> > want to reinvent.  Well then I'd use a full blown text extension 
> > (http://www.postgresql.org/docs/8.3/static/textsearch.html)  or a separate 
> > search engine.
>
> Thanks for the good suggestions on this. My need is pretty simple. I have a
> column that contains values like this:
>
> A100
> AA309
> B101
>
> I need something to find all the rows that look like Axxx while excluding 
> those
> that look like AAxxx. The LIKE operator doesn't seem to be able to do that. I
> am using PostgreSQL so the ~ operator may be my best bet.
>
> Scratch that ... found this message:
>    http://www.mail-archive.com/sqlalchemy@googlegroups.com/msg18598.html
> which says I should be able to do a 'SIMILAR TO' construct which is perhaps
> somewhat more lightweight than a full regexp.
>
> Thanks,
> Michael




[sqlalchemy] Re: DDL and TypeError: 'dict' object does not support indexing

2010-08-30 Thread Sven A. Schmidt
Just wondering if it's any different if you try the triple-quote syntax
"""...""". For example, in a similar case I use

  q = """
select
  os,
  count(os)
from (
 select
   distinct
   s.id,
   os
 from
   server s
   join instance_server ins on s.id = ins.server_id
   join instance i on ins.instance_servers_id = i.id
   join nar n on i.nar_id = n.id
 where
   upper(n.nar_name) = upper(:app)
   and host_type = 'PHYSICAL'
)
group by os
order by os
  """
  q = text(q)
  conn = session.connection()
  rows = conn.execute(q, app=app).fetchall()

with the additional benefit that I can copy the SQL verbatim to an SQL
editor for testing.

-sas

On Aug 30, 7:00 pm, Petr Kobalíček  wrote:
> Hi Michael,
>
> triple escaping works, sorry for misinformation.
>
> Best regards
> Petr




[sqlalchemy] Re: mixed up relationships

2010-08-12 Thread Sven A. Schmidt
Carl,

the formatting got a bit messed up, but if I read your definition
correctly you defined the relationship as 'splits'. So you'd want to write
"trainingEffort.splits" (lower case 's' in splits).

-sas

On Aug 12, 9:20 am, Bleve  wrote:
> I'm using SQLAlchemy 0.6.3 on Python 2.6 (win32)
>
> I have a class, Efforts, which has many splits (time and distance and
> some other fluff), so for each training effort, there can be one or
> more splits.  It has a few other relationships as well, but this is
> the important one, or at least, the one I'm stuck on at the moment!
>
> The definition in my script :
>
> class Efforts(Base):
>     __tablename__ = 'efforts'
>
>     id = Column(Integer, primary_key = True)
>     effortTypeId = Column(Integer, ForeignKey("effortTypes.id"),
> nullable=False)
>     riderId = Column(Integer, ForeignKey("riders.id"), nullable=False)
>     sessionId = Column(Integer, ForeignKey("trainingSessions.id"),
> nullable=False)
>     .
>     # stuff trimmed to save space
>     .
>
>     splits = relationship(Splits, backref='efforts',
> order_by=Splits.sequenceNumber)
>     rider = relationship(Riders, backref='efforts',
> order_by=Riders.id)
>     session = relationship(TrainingSessions, backref='efforts',
> order_by= TrainingSessions.id)
>
>     def __init__(self, effortTypeId, riderId, sessionId, distance,
> time, maxpower, fiveSecPower, \
>     maxTorque, startTime, temperature, pressure, humidity, windSpeed,
> comments, timingAccuracy, \
>     gearInches, seated, standingStartType, startingSpeed, startingLeg,
> riderWeight, bikeWeight, \
>     extraWeight, borgRPE):
>         self.effortTypeId = effortTypeId
>         self.riderId = riderId
>         self.sessionId = sessionId
>
>        # stuff trimmed ...
>
> and the 'splits' definition :
>
> class Splits(Base):
>     __tablename__ = 'splits'
>
>     id = Column(Integer, primary_key = True)
>     effort = Column(Integer, ForeignKey("efforts.id"), nullable=False)
>     sequenceNumber = Column(Integer, nullable=False, default = 1)
>     distance = Column(Float, nullable=False)
>     time = Column(Float, nullable=False)
>     timingAccuracy = Column(Float, default = 0.1)
>
>     def __init__(self, effort, sequenceNumber, distance, time,
> timingAccuracy):
>         self.effort = effort
>         self.sequenceNumber = sequenceNumber
>         self.distance = distance
>         self.time = time
>         self.timingAccuracy = timingAccuracy
>
> I've stuffed something up, because when I create an 'effort' :
>
>        trainingEffort = stDataClasses.Efforts(
>                         effortTypeId = ChosenEffortId,\
>                         riderId = self.rider.id,\
>                         sessionId = self.thisSession.id,\
>                         .
>                         .
>                         .
>                         )
>         print trainingEffort
>         print "splits in trainingEffort :", trainingEffort.Splits
>
> I should, I think, see a null array when I ask for
> trainingEffort.Splits, but instead I see this :
>
> AttributeError: 'Efforts' object has no attribue 'Splits'
>
> Can anyone here untangle my mess?  My python skill level is still very
> much a novice, I've read the SQLAlchemy doco for the ORM but I think
> something's just not sinking in.
>
> Thank you for any pointers in the right direction!
>
> Carl




[sqlalchemy] Re: SAWarning: Got 'None' querying 'table_name' from all_cons_columns - does the user have proper rights to the table?

2010-08-10 Thread Sven A. Schmidt
Michael,

I found the problem when I stepped back and started from scratch.
It turns out that the error message is actually just a warning (and
I'm still getting it), but the schema reflection works despite this
message. I wrongly assumed this was why my script was failing, but it's
actually due to this part of my schema declaration:

  forecasts_line_items = Table('forecasts_line_items', metadata,
...
Column('line_item_id', Integer, ForeignKey('line_items.id')),
...
autoload=True, useexisting=True, schema=conf.schema
  )

Due to a problem with the schema I'm accessing -- it does not always have
FKs set up properly -- I needed to declare the FKs manually for SQLA
to pick up the relations. It turns out that I need to write
'SAS.line_items.id' in this case (not sure if the upper case is really
needed, but there was a problem with lower-case schema prefixes in the
past). This didn't occur to me because the schema prefix is
automatically applied in other cases, like table and sequence names.
Only when I went back to the bare minimum did I notice it starting to
fail when the ForeignKey declaration came in.

This is also the one difference to the old working code where all FKs
were in place and I never needed to specify ForeignKey(...)
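
In other words, the declaration needed to look roughly like this (a minimal
sketch, other columns omitted; metadata and conf are the same objects as
above, and the usual Table/Column/Integer/ForeignKey imports are assumed):

forecasts_line_items = Table('forecasts_line_items', metadata,
    # foreign key target qualified with the owning schema so reflection
    # from the other account can resolve it
    Column('line_item_id', Integer, ForeignKey('SAS.line_items.id')),
    autoload=True, useexisting=True, schema=conf.schema
)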

Sorry for the noise,
Sven

On Aug 10, 4:10 pm, "Sven A. Schmidt"  wrote:
> Thanks, Michael. I wish I could update to 0.6.3 but unfortunately I'll
> have to stick with the deployed version of 0.5.5 for now. But in any
> case the good news is that this used to work once. It's just that
> quite a few parameters are at work here so it may be difficult to
> track down why it's failing now. Maybe your looking into it will give
> the angle to see what really made this break.
>
> I'm currently trying to avoid autoload=True (and thereby the failing
> schema queries, I assume) by manually specifying all columns (it's
> just 4-5 tables, fortunately). Maybe that'll help me work around the
> issue. I'll report back once I know more.
>
> Cheers,
> Sven
>
> On Aug 10, 3:44 pm, Michael Bayer  wrote:
>
>
>
> > On Aug 10, 2010, at 9:28 AM, Sven A. Schmidt wrote:
>
> > > Getting this out of the way first, because I always forget ;) :
> > > SQLAlchemy-0.5.5, Python 2.6
>
> > I'll take a look at this later but you should probably be tracking down the 
> > issue in 0.6.3, assuming its still present - that's where we'd fix any 
> > issues.
>
> > > I'm getting the above error when trying to connect to an Oracle schema
> > > 'TEST' and read from another schema 'SAS' where tables are exposed via
> > > synonyms (permissions have been granted) and I'm hoping that someone
> > > on this list may be able to cast some light on what's going on. I've
> > > googled for the error but only found the source where this exception
> > > is being raised.
>
> > > I've tried connecting with echo=True and running the SQL by hand from
> > > the TEST account and the strange thing is I get no error but a list of
> > > fields as I would expect (see below). First, here's the echo output of
> > > my script:
>
> > > 2010-08-10 14:56:00,221 INFO sqlalchemy.engine.base.Engine.0x...4b90
> > > select COLUMN_NAME, DATA_TYPE, DATA_LENGTH, DATA_PRECISION,
> > > DATA_SCALE, NULLABLE, DATA_DEFAULT from ALL_TAB_COLUMNS where
> > > TABLE_NAME = :table_name and OWNER = :owner
> > > INFO select COLUMN_NAME, DATA_TYPE, DATA_LENGTH, DATA_PRECISION,
> > > DATA_SCALE, NULLABLE, DATA_DEFAULT from ALL_TAB_COLUMNS where
> > > TABLE_NAME = :table_name and OWNER = :owner
> > > 2010-08-10 14:56:00,221 INFO sqlalchemy.engine.base.Engine.0x...4b90
> > > {'owner': 'SAS', 'table_name': 'FORECASTS'}
> > > INFO {'owner': 'SAS', 'table_name': 'FORECASTS'}
> > > 2010-08-10 14:56:00,230 INFO sqlalchemy.engine.base.Engine.0x...4b90
> > > SELECT
> > >             ac.constraint_name,
> > >             ac.constraint_type,
> > >             loc.column_name AS local_column,
> > >             rem.table_name AS remote_table,
> > >             rem.column_name AS remote_column,
> > >             rem.owner AS remote_owner
> > >           FROM all_constraints ac,
> > >             all_cons_columns loc,
> > >             all_cons_columns rem
> > >           WHERE ac.table_name = :table_name
> > >           AND ac.constraint_type IN ('R','P')
> > >           

[sqlalchemy] Re: SAWarning: Got 'None' querying 'table_name' from all_cons_columns - does the user have proper rights to the table?

2010-08-10 Thread Sven A. Schmidt
Thanks, Michael. I wish I could update to 0.6.3 but unfortunately I'll
have to stick with the deployed version of 0.5.5 for now. But in any
case the good news is that this used to work once. It's just that
quite a few parameters are at work here so it may be difficult to
track down why it's failing now. Maybe your looking into it will give
the angle to see what really made this break.

I'm currently trying to avoid autoload=True (and thereby the failing
schema queries, I assume) by manually specifying all columns (it's
just 4-5 tables, fortunately). Maybe that'll help me work around the
issue. I'll report back once I know more.

Cheers,
Sven

On Aug 10, 3:44 pm, Michael Bayer  wrote:
> On Aug 10, 2010, at 9:28 AM, Sven A. Schmidt wrote:
>
> > Getting this out of the way first, because I always forget ;) :
> > SQLAlchemy-0.5.5, Python 2.6
>
> I'll take a look at this later but you should probably be tracking down the 
> issue in 0.6.3, assuming its still present - that's where we'd fix any issues.
>
>
>
>
>
> > I'm getting the above error when trying to connect to an Oracle schema
> > 'TEST' and read from another schema 'SAS' where tables are exposed via
> > synonyms (permissions have been granted) and I'm hoping that someone
> > on this list may be able to cast some light on what's going on. I've
> > googled for the error but only found the source where this exception
> > is being raised.
>
> > I've tried connecting with echo=True and running the SQL by hand from
> > the TEST account and the strange thing is I get no error but a list of
> > fields as I would expect (see below). First, here's the echo output of
> > my script:
>
> > 2010-08-10 14:56:00,221 INFO sqlalchemy.engine.base.Engine.0x...4b90
> > select COLUMN_NAME, DATA_TYPE, DATA_LENGTH, DATA_PRECISION,
> > DATA_SCALE, NULLABLE, DATA_DEFAULT from ALL_TAB_COLUMNS where
> > TABLE_NAME = :table_name and OWNER = :owner
> > INFO select COLUMN_NAME, DATA_TYPE, DATA_LENGTH, DATA_PRECISION,
> > DATA_SCALE, NULLABLE, DATA_DEFAULT from ALL_TAB_COLUMNS where
> > TABLE_NAME = :table_name and OWNER = :owner
> > 2010-08-10 14:56:00,221 INFO sqlalchemy.engine.base.Engine.0x...4b90
> > {'owner': 'SAS', 'table_name': 'FORECASTS'}
> > INFO {'owner': 'SAS', 'table_name': 'FORECASTS'}
> > 2010-08-10 14:56:00,230 INFO sqlalchemy.engine.base.Engine.0x...4b90
> > SELECT
> >             ac.constraint_name,
> >             ac.constraint_type,
> >             loc.column_name AS local_column,
> >             rem.table_name AS remote_table,
> >             rem.column_name AS remote_column,
> >             rem.owner AS remote_owner
> >           FROM all_constraints ac,
> >             all_cons_columns loc,
> >             all_cons_columns rem
> >           WHERE ac.table_name = :table_name
> >           AND ac.constraint_type IN ('R','P')
> >           AND ac.owner = :owner
> >           AND ac.owner = loc.owner
> >           AND ac.constraint_name = loc.constraint_name
> >           AND ac.r_owner = rem.owner(+)
> >           AND ac.r_constraint_name = rem.constraint_name(+)
> >           -- order multiple primary keys correctly
> >           ORDER BY ac.constraint_name, loc.position, rem.position
> > INFO SELECT
> >             ac.constraint_name,
> >             ac.constraint_type,
> >             loc.column_name AS local_column,
> >             rem.table_name AS remote_table,
> >             rem.column_name AS remote_column,
> >             rem.owner AS remote_owner
> >           FROM all_constraints ac,
> >             all_cons_columns loc,
> >             all_cons_columns rem
> >           WHERE ac.table_name = :table_name
> >           AND ac.constraint_type IN ('R','P')
> >           AND ac.owner = :owner
> >           AND ac.owner = loc.owner
> >           AND ac.constraint_name = loc.constraint_name
> >           AND ac.r_owner = rem.owner(+)
> >           AND ac.r_constraint_name = rem.constraint_name(+)
> >           -- order multiple primary keys correctly
> >           ORDER BY ac.constraint_name, loc.position, rem.position
> > 2010-08-10 14:56:00,230 INFO sqlalchemy.engine.base.Engine.0x...4b90
> > {'owner': 'SAS', 'table_name': 'FORECASTS'}
> > INFO {'owner': 'SAS', 'table_name': 'FORECASTS'}
> > /Library/Python/2.6/si

[sqlalchemy] SAWarning: Got 'None' querying 'table_name' from all_cons_columns - does the user have proper rights to the table?

2010-08-10 Thread Sven A. Schmidt
AND ac.constraint_name = loc.constraint_name
   AND ac.r_owner = rem.owner(+)
   AND ac.r_constraint_name = rem.constraint_name(+)
   -- order multiple primary keys correctly
   ORDER BY ac.constraint_name, loc.position, rem.position;

and got

"CONSTRAINT_NAME"   "CONSTRAINT_TYPE"   "LOCAL_COLUMN"  "REMOTE_TABLE"  
"REMOTE_COLUMN" "REMOTE_OWNER"
"FK5E6775D8575C2425""R" "FORECAST_STATUS_ID"""  ""  ""
"FK5E6775D871226E5" "R" "SUB_GROUP_ID"  "GROUPS""ID""SAS"
"FK5E6775D87C3474A6""R" "PROGRAMME_ID"  "PROGRAMME_CFG" "ID""SAS"
"FK5E6775D88E98545F""R" "APPROVAL_CYCLE_ID" "APPROVAL_CYCLES"   
"ID""SAS"
"FK5E6775D896C4452F""R" "BUSINESS_IMPACT_ID"""  ""  ""
"FK5E6775D89A1530AE""R" "DIVISION_ID"   "DIVISIONS_CFG" "ID""SAS"
"FK5E6775D89F6A3DA5""R" "BUSINESS_JUSTIFICATION_ID" ""  ""  
""
"FK5E6775D8B85D0B4E""R" "INITIATIVE_ID" "INITIATIVE_CFG""ID"
"SAS"
"FK5E6775D8FC3CABC6""R" "GROUP_ID"  "GROUPS""ID""SAS"
"SYS_C0057558"  "P" "ID"""  ""  ""

(I hope this is readable, here's the same in Google docs:
https://spreadsheets.google.com/ccc?key=0AgdbG5HyoweVdGZiN2RnM1JMY3hDSGpuZ09CVVdsbkE&hl=en&authkey=CKqIrcsL)

I'm connecting to the database with the schema='SAS' parameter on all
Table objects. I've done the same thing in the past (connecting to
another schema I read from) in SQLAlchemy (same version, 0.5.5) but
unfortunately the different scripts are hard to compare as such. They
do look very much alike from how they connect and obtain schema
information (but you're often blind to differences in your own code).
I'll probably have to strip them down further in the end to track this
down but this error looks like something more fundamental may be
wrong.

Does anyone on this list have an idea what I could try to investigate
further? I hope I've included all the relevant info in this (quite
long, sorry!) mail. Let me know if I can provide anything else!

Cheers,
Sven




[sqlalchemy] Re: Oracle: "There are multiple tables visible..."

2009-06-24 Thread Sven A. Schmidt

> > 2. no synonym parameter, schema = 'AINV_OWNER' on Table(...)
> > Results in the error: "Could not determine join condition between
> > parent/child tables..." I had expected this to work but it seems that
> > for some reason SQLA doesn't see the constraint info on tables in
> > another user's schema. Is that a bug?
>
> that would be a bug, yes.   but, try specifying schema and all the names
> using lowercase characters (not ALL_UPPERCASE as you may be doing) - SQLA
> will ensure that it uses case insensitive identifiers (it converts to
> uppercase as needed when talking to oracle).  its possible that there is a
> mismatch between target names and specified names causing this issue.  or
> maybe the oracle dialect just doesn't interpret the "owner" part of a
> foreign key constraint correctly yet (im not easily able to test things
> like that with Oracle XE).

That did the trick!
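
For the record, here is roughly what the working declaration looks like (a
minimal sketch; metadata and the Table import are the same as in my script):

# all-lowercase schema and table names, which SQLAlchemy treats as
# case-insensitive identifiers and upper-cases as needed for Oracle
requests = Table('requests', metadata,
                 autoload=True, schema='ainv_owner')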

Thanks a lot,
Sven




[sqlalchemy] Re: Oracle: "There are multiple tables visible..."

2009-06-23 Thread Sven A. Schmidt

Michael,

thanks for your reply! Unfortunately, I believe I have to use the
synonyms feature, because I cannot use the table owner's account
to access the database. Everything works if I use the table owner's
schema (AINV_OWNER), but the db policies forbid this for production, so
I have to somehow get this to work from another account.

I should have elaborated on this in my initial mail -- I went through
the following combinations of options, as far as I am aware of them,
to try and work around the problem:

1. no synonym parameter, no schema parameter on Table(...)
Results in the error: "Couldn't find any column information..." This
is obvious: AINV_USER doesn't own the tables and in the absence of the
schema parameter SQLA can't find any table info

2. no synonym parameter, schema = 'AINV_OWNER' on Table(...)
Results in the error: "Could not determine join condition between
parent/child tables..." I had expected this to work but it seems that
for some reason SQLA doesn't see the constraint info on tables in
another user's schema. Is that a bug? I could work around this but it
would mean I have to manually specify all relations which are
correctly read from the db if I connect with the table owner schema.
I'd like to avoid doing that, because I'm lazy ;)

3. synonym parameter, no schema parameter on Table(...)
Results in the error: "There are multiple tables visible..." As
described, this results from the _resolve_synonym call in base.py.

4. synonym parameter, schema = 'AINV_OWNER' on Table(...)
Results in the error: "Could not determine join condition between
parent/child tables..." See 2) above.

It seems I'm stuck between a rock and a hard place here ;)

Cheers,
Sven

On Jun 23, 5:03 pm, "Michael Bayer"  wrote:
> Sven A. Schmidt wrote:
>
> > Hi,
>
> > I've hit a problem very recently with autoloading of table info from
> > an oracle schema which I believe is caused by a problem inside the
> > _resolve_synonym method of oracle/base.py. I've googled around a bit
> > but didn't find this issue reported previously. It may well be a
> > problem with our db setup but I'm hoping folks on this list will be
> > able to shed some light on it either way :)
>
> > What's happening is that in the db there are two rows returned when
> > running the query
>
> > select OWNER, TABLE_OWNER, TABLE_NAME, SYNONYM_NAME from ALL_SYNONYMS
> > WHERE table_name = 'REQUESTS';
>
> > Result:
>
> > AINV_REPORT        AINV_OWNER      REQUESTS        REQUESTS
> > AINV_USER  AINV_OWNER      REQUESTS        REQUESTS
>
> > Inside _resolve_synonym this query is run and if len(rows) >1 the
> > following error is raised:
>
> > "There are multiple tables visible to the schema, you must specify
> > owner"
>
> > I was thinking that perhaps the query used inside _resolve_synonyms
> > could/should include a check on the synonym owner, to exclude multiple
> > matches when the same table exists as a synonym in another user's
> > schema. Or would that break other things?
>
> this assumes that you need to use the resolve_synonyms feature in the first
> place (its off by default).   oracle_resolve_synonyms is probably not
> worth using if you aren't using DBLINK (and maybe not even if you are) -
> that was the original use case for it.   If you leave the feature off and
> just reflect "requests", the whole function won't get involved.



[sqlalchemy] Oracle: "There are multiple tables visible..."

2009-06-23 Thread Sven A. Schmidt

Hi,

I've hit a problem very recently with autoloading of table info from
an oracle schema which I believe is caused by a problem inside the
_resolve_synonym method of oracle/base.py. I've googled around a bit
but didn't find this issue reported previously. It may well be a
problem with our db setup but I'm hoping folks on this list will be
able to shed some light on it either way :)

What's happening is that in the db there are two rows returned when
running the query

select OWNER, TABLE_OWNER, TABLE_NAME, SYNONYM_NAME from ALL_SYNONYMS
WHERE table_name = 'REQUESTS';

Result:

AINV_REPORT AINV_OWNER  REQUESTSREQUESTS
AINV_USER   AINV_OWNER  REQUESTSREQUESTS

Inside _resolve_synonym this query is run and if len(rows) >1 the
following error is raised:

"There are multiple tables visible to the schema, you must specify
owner"

I tried specifying the owner (AINV_USER) by using the schema parameter
of Table(), but that in turn causes errors because parent/child
relations cannot be automatically determined (which they can just fine
if the script runs from the table owner's schema). The error is:
"Could not determine join condition between parent/child tables on
relation..." It looks like synonyms don't make the foreign key
constraints visible, just the table names themselves (but that's just
a guess).

Since I rely heavily on autoload, I'd have to specify a lot of
primaryjoins for the relations if I used the schema parameter.

I was thinking that perhaps the query used inside _resolve_synonyms
could/should include a check on the synonym owner, to exclude multiple
matches when the same table exists as a synonym in another user's
schema. Or would that break other things?

In the meantime I'm trying to get rid of the extra synonyms, but I'm
not sure whether that can always be avoided, or whether it's too fragile
overall. In the end it would mean that by creating extra synonyms
for another user a working script could break. Or am I missing
something here?

Any insight greatly appreciated! :)

Cheers,
Sven




[sqlalchemy] Self Reference within Joined Inheritance

2009-05-23 Thread sven-eric

Dear Group,

I have a problem with setting up self referential relations within a
joined table inheritance scheme and declarative mapping. Let's say I
have a base class B with a derived class S. S has a self-referential
many-to-one relationship to B (and with that also to all of B's
derived classes). The declarative definition seems to compile fine if
a database system without native support for foreign keys is used
(like sqlite), but breaks down with systems like InnoDB on mysql where
foreign keys are supported natively (in the latter case I get a
"tables have more than one foreign key constraint relationship between
them. Please specify the 'onclause' of this join explicitly" error).

So, the following test case works on SQLite but fails if I move to MySQL
with InnoDB (after pointing the engine declaration at the MySQL server and
setting the InnoDB storage engine in the __table_args__ of each class).
Could anyone give me advice on setting up this kind of relation on InnoDB?
Thanks a lot.

-sven-eric



import sqlalchemy
from sqlalchemy import create_engine
from sqlalchemy import String, Column, Integer, ForeignKey, ForeignKeyConstraint
from sqlalchemy.orm import relation, backref
from sqlalchemy.ext.declarative import declarative_base

engine = create_engine('sqlite:///:memory:', echo=False)
Base = declarative_base()

# B is the superclass of S
class B(Base):
    __tablename__   = 'b'
    id = Column(Integer, primary_key=True)
    type = Column(String(1))
    __mapper_args__ = {'polymorphic_on': type, 'polymorphic_identity': 'b'}

# S is a subclass of B and also references an object of the B hierarchy
# via a foreign key
class S(B):
    __tablename__   = 's'
    __table_args__  = (ForeignKeyConstraint(['b_id'], ['b.id']),)
    __mapper_args__ = {'polymorphic_identity': 's'}
    id = Column(Integer, ForeignKey('b.id'), primary_key=True)
    b_id = Column(Integer)
    b = relation('B', backref='s_collection', primaryjoin='S.b_id==B.id')

Base.metadata.create_all(engine)




[sqlalchemy] Re: Polymorphism and single table inheritance

2009-05-20 Thread sven-eric

Alright, I see. I appreciate the quick reply, and thanks for all the
work you're putting into answering these questions.

-sven

On May 20, 4:45 pm, "Michael Bayer"  wrote:
> sven-eric wrote:
>
> > Dear List,
>
> >      I have an issue with single table inheritance and sqlalchemy
> > version 0.5.2. Specifically, I want to disable polymorphic load when
> > querying over a set of objects that are within a single-table-
> > inheritance. The normal way of using 'with_polymorphic(cls)' does not
> > seem to work properly here (the three queries at the end of the
> > provided code all return the same list of objects).
>
> >      I would be grateful if someone could read the short piece of code
> > and give me advice for how to retrieve only the 'Jon Doe' object of
> > the base class with the query. Thanks a lot.
>
> > -sven
>
> > import sqlalchemy
> > from sqlalchemy import create_engine
> > from sqlalchemy.orm import mapper, relation
> > from sqlalchemy.orm import sessionmaker
> > from sqlalchemy import Table, Column, Integer, String, MetaData,
> > ForeignKey
>
> > engine = create_engine('sqlite:///:memory:', echo=False)
> > metadata = MetaData()
>
> > class Employee(object):
> >     def __init__(self, name):
> >         self.name = name
> >     def __repr__(self):
> >         return self.__class__.__name__ + " " + self.name
>
> > class Manager(Employee):
> >     def __init__(self, name, manager_data):
> >         self.name = name
> >         self.manager_data = manager_data
> >     def __repr__(self):
> >         return self.__class__.__name__ + " " + self.name + " " +
> > self.manager_data
>
> > class Engineer(Employee):
> >     def __init__(self, name, engineer_info):
> >         self.name = name
> >         self.engineer_info = engineer_info
> >     def __repr__(self):
> >         return self.__class__.__name__ + " " + self.name + " " +
> > self.engineer_info
>
> > employees = Table('employees', metadata,
> >     Column('employee_id', Integer, primary_key=True),
> >     Column('name', String(50)),
> >     Column('type', String(30), nullable=False)
> > )
>
> > engineers = Table('engineers', metadata,
> >    Column('employee_id', Integer, ForeignKey('employees.employee_id'),
> > primary_key=True),
> >    Column('engineer_info', String(50)),
> > )
>
> > managers = Table('managers', metadata,
> >    Column('employee_id', Integer, ForeignKey('employees.employee_id'),
> > primary_key=True),
> >    Column('manager_data', String(50)),
> > )
>
> > metadata.create_all(engine)
>
> > mapper(Employee, employees, polymorphic_on=employees.c.type,
> > polymorphic_identity='employee')
> > mapper(Engineer, engineers, inherits=Employee,
> > polymorphic_identity='engineer')
> > mapper(Manager, managers, inherits=Employee,
> > polymorphic_identity='manager')
>
> > b = Employee('Jon Doe')
> > m = Manager('Jay Smith', 'Bailout Money')
> > e = Manager('Zephran Cochran', 'Hammer')
>
> > Session = sessionmaker(bind=engine)
> > session = Session()
>
> > session.add(b)
> > session.add(m)
> > session.add(e)
> > session.commit()
>
> > print session.query(Employee).all()
> > print session.query(Employee).with_polymorphic(Employee).all()
> > print session.query(Employee).with_polymorphic('*').all()
>
> this example is not using single table inheritance, it is using joined.
> with_polymorphic() is not a filter, it is an optimization for how things
> should be loaded.   The fact is that if you ask for all Employee objects,
> you will get all the Employee objects, which means all the Manager and
> Engineer objects as well.  If you'd like to limit to only the base class, use
> filter(Employee.type=='employee').
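
For anyone finding this later, a minimal sketch of that suggestion applied
to the code from my original post (variable names as in that code):

# only rows whose discriminator column is 'employee' come back, i.e. the
# plain Employee "Jon Doe" but not the Manager/Engineer subclasses
base_only = session.query(Employee).filter(Employee.type == 'employee').all()
print base_only   # expected: [Employee Jon Doe]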



[sqlalchemy] Polymorphism and single table inheritance

2009-05-20 Thread sven-eric

Dear List,

 I have an issue with single table inheritance and sqlalchemy
version 0.5.2. Specifically, I want to disable polymorphic loading when
querying over a set of objects that are within a single-table inheritance
hierarchy. The normal way of using 'with_polymorphic(cls)' does not
seem to work properly here (the three queries at the end of the
provided code all return the same list of objects).

 I would be grateful if someone could read the short piece of code
and give me advice on how to retrieve only the 'Jon Doe' object of
the base class with the query. Thanks a lot.

-sven



import sqlalchemy
from sqlalchemy import create_engine
from sqlalchemy.orm import mapper, relation
from sqlalchemy.orm import sessionmaker
from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey

engine = create_engine('sqlite:///:memory:', echo=False)
metadata = MetaData()

class Employee(object):
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return self.__class__.__name__ + " " + self.name

class Manager(Employee):
    def __init__(self, name, manager_data):
        self.name = name
        self.manager_data = manager_data
    def __repr__(self):
        return self.__class__.__name__ + " " + self.name + " " + self.manager_data

class Engineer(Employee):
    def __init__(self, name, engineer_info):
        self.name = name
        self.engineer_info = engineer_info
    def __repr__(self):
        return self.__class__.__name__ + " " + self.name + " " + self.engineer_info

employees = Table('employees', metadata,
    Column('employee_id', Integer, primary_key=True),
    Column('name', String(50)),
    Column('type', String(30), nullable=False)
)

engineers = Table('engineers', metadata,
    Column('employee_id', Integer, ForeignKey('employees.employee_id'),
           primary_key=True),
    Column('engineer_info', String(50)),
)

managers = Table('managers', metadata,
    Column('employee_id', Integer, ForeignKey('employees.employee_id'),
           primary_key=True),
    Column('manager_data', String(50)),
)

metadata.create_all(engine)

mapper(Employee, employees, polymorphic_on=employees.c.type,
       polymorphic_identity='employee')
mapper(Engineer, engineers, inherits=Employee,
       polymorphic_identity='engineer')
mapper(Manager, managers, inherits=Employee,
       polymorphic_identity='manager')

b = Employee('Jon Doe')
m = Manager('Jay Smith', 'Bailout Money')
e = Manager('Zephran Cochran', 'Hammer')

Session = sessionmaker(bind=engine)
session = Session()

session.add(b)
session.add(m)
session.add(e)
session.commit()

print session.query(Employee).all()
print session.query(Employee).with_polymorphic(Employee).all()
print session.query(Employee).with_polymorphic('*').all()
