[sqlalchemy] Re: Get the row count from a query

2022-06-06 Thread Jason Hoppes
Thank you Sergey.

On Saturday, June 4, 2022 at 7:08:25 AM UTC-4 Sergey V. wrote:

> An alternative would be
>
> session.query(User).filter(User.id_ == id).count()
>
> On Saturday, June 4, 2022 at 4:45:49 AM UTC+10 jason@stormfish-sci.com 
> wrote:
>
>> I believe I finally found a solution:
>>
>>  select(func.count(User.id_)).where(User.id_ == id)
>>
>> Thank you for taking the time to look.
>>
>> On Friday, June 3, 2022 at 1:45:39 PM UTC-4 Jason Hoppes wrote:
>>
>>> I would like to generate the following SQL using the ORM. I am testing 
>>> for no rows returning:
>>>
>>> select count(*) from users where id = 
>>>
>>> I found session.query(User.id_).count(); however, that does not have a 
>>> where clause. How would this be performed in the ORM with a where clause?
>>>
>>> Thanks,
>>>
>>> Jason
>>>
>>
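Both answers in this thread can be combined into one runnable sketch; SQLite, an Integer key, and the `username` column are stand-ins, assumed only for demonstration:

```python
from sqlalchemy import Column, Integer, String, create_engine, func, select
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

# Minimal stand-in for the User model discussed in the thread.
class User(Base):
    __tablename__ = "users"
    id_ = Column("id", Integer, primary_key=True)
    username = Column(String(255))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(username="jason"))
    session.commit()

    # 1.x Query style: .count() applied after the filter (Sergey's variant)
    n = session.query(User).filter(User.id_ == 1).count()

    # 2.0 style: select(func.count(...)) executed via Session.scalar()
    m = session.scalar(select(func.count(User.id_)).where(User.id_ == 1))

    print(n, m)
```

Both forms emit a `SELECT count(...) FROM users WHERE id = ?`; a result of 0 indicates no matching rows.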

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/15c3af94-ec76-4600-bca7-c6ba6cc09a38n%40googlegroups.com.


[sqlalchemy] Re: Get the row count from a query

2022-06-03 Thread Jason Hoppes
I believe I finally found a solution:

 select(func.count(User.id_)).where(User.id_ == id)

Thank you for taking the time to look.

On Friday, June 3, 2022 at 1:45:39 PM UTC-4 Jason Hoppes wrote:

> I would like to generate the following SQL using the ORM. I am testing for 
> no rows returning:
>
> select count(*) from users where id = 
>
> I found session.query(User.id_).count(); however, that does not have a 
> where clause. How would this be performed in the ORM with a where clause?
>
> Thanks,
>
> Jason
>



[sqlalchemy] Get the row count from a query

2022-06-03 Thread Jason Hoppes
I would like to generate the following SQL using the ORM. I am testing for 
no rows returning:

select count(*) from users where id = 

I found session.query(User.id_).count(); however, that does not have a where 
clause. How would this be performed in the ORM with a where clause?

Thanks,

Jason



[sqlalchemy] Re: sqlalchemy.exc.UnboundExecutionError

2022-05-17 Thread Jason Hoppes
I found my error; it was in the logic. Thank you for taking the time to read.

- Jason

On Tuesday, May 17, 2022 at 4:51:49 PM UTC-4 Jason Hoppes wrote:

> I am getting an exception:
>
> sqlalchemy.exc.UnboundExecutionError: Could not locate a bind configured 
> on mapper mapped class User->users, SQL expression or this Session.
>
> My engine creation code is in a static method of a class and is as follows:
>
> db_url = 'postgresql+psycopg2://myuser:'\
>  'mypw@localhost:5432/'\
>  'mydb'
> cls.engine = create_engine(db_url, echo=False)
>
> at the module level I have this function:
>
> def add_user(username, password,
>  active=True, by=1,
>  uid=None):
> """Create a User and add it to the database."""
> with Session(DatabaseORM.get_engine()) as session:
> if uid is not None:
> user = User(username=username,
> password=password,
> active=active,
> created_by=by,
> uid=uid)
> else:
> user = User(username=username,
> password=password,
> active=active,
> created_by=by)
>
> session.add(user)
>
> session.commit()
> id_ = user.id_
>
> return id_
>
> I tried adding:
>
> session = Session.configure(bind=self.engine) 
>
> in the with block but it told me configure is not an attribute of session.
>
> Any help would be appreciated thank you.
>
> - Jason
>
>



[sqlalchemy] sqlalchemy.exc.UnboundExecutionError

2022-05-17 Thread Jason Hoppes
I am getting an exception:

sqlalchemy.exc.UnboundExecutionError: Could not locate a bind configured on 
mapper mapped class User->users, SQL expression or this Session.

My engine creation code is in a static method of a class and is as follows:

db_url = 'postgresql+psycopg2://myuser:'\
 'mypw@localhost:5432/'\
 'mydb'
cls.engine = create_engine(db_url, echo=False)

at the module level I have this function:

def add_user(username, password,
 active=True, by=1,
 uid=None):
"""Create a User and add it to the database."""
with Session(DatabaseORM.get_engine()) as session:
if uid is not None:
user = User(username=username,
password=password,
active=active,
created_by=by,
uid=uid)
else:
user = User(username=username,
password=password,
active=active,
created_by=by)

session.add(user)

session.commit()
id_ = user.id_

return id_

I tried adding:

session = Session.configure(bind=self.engine) 

in the with block but it told me configure is not an attribute of session.

Any help would be appreciated thank you.

- Jason
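The usual fix for this error is to bind the engine when the Session is constructed; `configure()` exists on `sessionmaker`, not on `Session` instances. A minimal sketch, with SQLite standing in for the PostgreSQL URL from the post and the model fields assumed:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id_ = Column("id", Integer, primary_key=True)
    username = Column(String(255))

# SQLite stands in for the postgresql+psycopg2 URL above.
engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

# Passing the engine to Session() binds it up front; every query issued
# through this session now has an engine to execute against.
with Session(engine) as session:
    user = User(username="jason")
    session.add(user)
    session.commit()
    id_ = user.id_  # read inside the session so the expired attribute reloads
print(id_)
```

Reading `user.id_` after `commit()` but inside the `with` block matters: attributes are expired on commit and reloading them requires a live session.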



Re: [sqlalchemy] Session within a session

2022-04-29 Thread Jason Hoppes
Sorry I got diverted on another project. I will try this later and let you 
know. Thanks for your help.

- Jason

On Wednesday, April 20, 2022 at 8:45:39 PM UTC-4 Mike Bayer wrote:

> you can work with multiple sessions and their objects simultaneously, with 
> the provision that you don't add() an object from one session directly into 
> the other one, without detaching it from the original session first.  to 
> transfer the state of an object from one session to another without 
> detaching it, use the session.merge() method.
>
> more important is that when you work with these sessions, the sessions are 
> not being simultaneously accessed by other concurrent tasks such as those 
> in other threads, greenlets or async coroutines.  if the two sessions are 
> local to the current task/thread/whatever, just keep track of which objects 
> are from which and it will be fine.
>
> On Wed, Apr 20, 2022, at 11:28 AM, Jason Hoppes wrote:
>
> I have an object that adds a user, and another object that gets cipher 
> information for that user's password, each with its own session. Could I 
> call the object that gets the cipher, using its own separate session, from 
> within the session in which I add the user? Effectively it would do a 
> lookup using an inner session.
>
> Thanks in advance for your help.
>
> - Jason
>
>
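Mike's point about transferring state between sessions with `session.merge()` can be sketched as follows; the `User` model and names are assumptions for illustration:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id_ = Column("id", Integer, primary_key=True)
    username = Column(String(255))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

outer = Session(engine)
inner = Session(engine)

user = User(username="jason")
outer.add(user)
outer.commit()

# add()-ing `user` straight into `inner` would raise, since it is still
# attached to `outer`.  merge() copies its state into `inner` instead and
# returns the inner session's own copy.
copy = inner.merge(user)
copy.username = "jason.h"
inner.commit()

outer.refresh(user)     # reload from the database in the original session
print(user.username)
inner.close()
outer.close()
```

Both sessions here live in the same thread, matching the provision in the reply that the sessions not be shared across concurrent tasks.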



Re: [sqlalchemy] query many-many with association table

2022-04-29 Thread Jason Hoppes
Thanks, Simon, this worked.

- Jason

On Wednesday, April 6, 2022 at 6:26:51 AM UTC-4 Simon King wrote:

> I think it should work if you join to the *relationship* explicitly
>
> ie.
>
> session.query(User).join(User.user_groups).filter(...)
>
> Hope that helps,
>
> Simon
>
>
> On Tue, Apr 5, 2022 at 9:48 PM Jason Hoppes  
> wrote:
>
>> I want to select all users in a particular group. I have a users table, 
>> user_groups table, and a users_group_users_asc table to associate the two. 
>> Note this is not a self-referencing relationship; there are three different 
>> tables involved, not two as in the example in the documentation. I have the 
>> following configuration:
>>
>> user_groups_users = Table('user_groups_users_asc', Base.metadata,
>>   Column('user_group_id', ForeignKey('
>> user_groups.id', ondelete='CASCADE')),
>>   Column('user_id', ForeignKey('users.id', 
>> ondelete='CASCADE'))
>> )
>>
>> class User(Base):
>> __tablename__ = 'users'
>> id_ = Column('id', BigInteger, primary_key=True)
>> username = Column('username', String(255))
>> user_groups = relationship('UserGroup',
>>                            secondary=user_groups_users,
>>                            back_populates='users')
>>
>> class UserGroup(Base):
>> __tablename__ = 'user_groups'
>> id_ = Column('id', BigInteger, primary_key=True)
>> group_name = Column('group_name', String(255), nullable=False)
>> description = Column('description', Text)
>> users = relationship('User',
>>  secondary=user_groups_users,
>>  back_populates='user_groups',
>>  passive_deletes=True)
>>
>> As I suspected the following query gives me an error:
>>
>> session.query(User).join(UserGroup).filter(UserGroup.group_name == 
>> grp_name).all()
>>
>> Don't know how to join to . Please 
>> use the .select_from() method to establish an explicit left side, as well 
>> as providing an explicit ON clause if not present already to help resolve 
>> the ambiguity.
>>
>> Thank you in advance for your help.
>>
>> - Jason
>>



Re: [sqlalchemy] query many-many with association table

2022-04-29 Thread Hoppes, Jason
Thank you, Simon, you solved my problem.


On Wed, Apr 6, 2022 at 6:26 AM Simon King  wrote:

> I think it should work if you join to the *relationship* explicitly
>
> ie.
>
> session.query(User).join(User.user_groups).filter(...)
>
> Hope that helps,
>
> Simon
>
>
> On Tue, Apr 5, 2022 at 9:48 PM Jason Hoppes <
> jason.hop...@stormfish-sci.com> wrote:
>
>> I want to select all users in a particular group. I have a users table,
>> user_groups table, and a users_group_users_asc table to associate the two.
>> Note this is not a self-referencing relationship; there are three different
>> tables involved, not two as in the example in the documentation. I have the
>> following configuration:
>>
>> user_groups_users = Table('user_groups_users_asc', Base.metadata,
>>   Column('user_group_id', ForeignKey('
>> user_groups.id', ondelete='CASCADE')),
>>   Column('user_id', ForeignKey('users.id',
>> ondelete='CASCADE'))
>> )
>>
>> class User(Base):
>> __tablename__ = 'users'
>> id_ = Column('id', BigInteger, primary_key=True)
>> username = Column('username', String(255))
>> user_groups = relationship('UserGroup',
>>                            secondary=user_groups_users,
>>                            back_populates='users')
>>
>> class UserGroup(Base):
>> __tablename__ = 'user_groups'
>> id_ = Column('id', BigInteger, primary_key=True)
>> group_name = Column('group_name', String(255), nullable=False)
>> description = Column('description', Text)
>> users = relationship('User',
>>  secondary=user_groups_users,
>>  back_populates='user_groups',
>>  passive_deletes=True)
>>
>> As I suspected the following query gives me an error:
>>
>> session.query(User).join(UserGroup).filter(UserGroup.group_name ==
>> grp_name).all()
>>
>> Don't know how to join to . Please
>> use the .select_from() method to establish an explicit left side, as well
>> as providing an explicit ON clause if not present already to help resolve
>> the ambiguity.
>>
>> Thank you in advance for your help.
>>
>> - Jason
>>



[sqlalchemy] Session within a session

2022-04-20 Thread Jason Hoppes
I have an object that adds a user, and another object that gets cipher 
information for that user's password, each with its own session. Could I 
call the object that gets the cipher, using its own separate session, from 
within the session in which I add the user? Effectively it would do a lookup 
using an inner session.

Thanks in advance for your help.

- Jason



[sqlalchemy] query many-many with association table

2022-04-05 Thread Jason Hoppes
I want to select all users in a particular group. I have a users table, 
user_groups table, and a users_group_users_asc table to associate the two. 
Note this is not a self-referencing relationship; there are three different 
tables involved, not two as in the example in the documentation. I have the 
following configuration:

user_groups_users = Table('user_groups_users_asc', Base.metadata,
  Column('user_group_id', 
ForeignKey('user_groups.id', ondelete='CASCADE')),
  Column('user_id', ForeignKey('users.id', 
ondelete='CASCADE'))
)

class User(Base):
__tablename__ = 'users'
id_ = Column('id', BigInteger, primary_key=True)
username = Column('username', String(255))
user_groups = relationship('UserGroup',
                           secondary=user_groups_users,
                           back_populates='users')

class UserGroup(Base):
__tablename__ = 'user_groups'
id_ = Column('id', BigInteger, primary_key=True)
group_name = Column('group_name', String(255), nullable=False)
description = Column('description', Text)
users = relationship('User',
 secondary=user_groups_users,
 back_populates='user_groups',
 passive_deletes=True)

As I suspected the following query gives me an error:

session.query(User).join(UserGroup).filter(UserGroup.group_name == 
grp_name).all()

Don't know how to join to . Please use 
the .select_from() method to establish an explicit left side, as well as 
providing an explicit ON clause if not present already to help resolve the 
ambiguity.

Thank you in advance for your help.

- Jason
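Simon's answer in the replies — joining through the relationship attribute rather than the target class — can be sketched against a trimmed version of the mapping above (SQLite and Integer keys substituted for demonstration):

```python
from sqlalchemy import (Column, ForeignKey, Integer, String, Table,
                        create_engine)
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

# Association table as in the post.
user_groups_users = Table(
    "user_groups_users_asc", Base.metadata,
    Column("user_group_id", ForeignKey("user_groups.id", ondelete="CASCADE")),
    Column("user_id", ForeignKey("users.id", ondelete="CASCADE")),
)

class User(Base):
    __tablename__ = "users"
    id_ = Column("id", Integer, primary_key=True)
    username = Column(String(255))
    user_groups = relationship("UserGroup", secondary=user_groups_users,
                               back_populates="users")

class UserGroup(Base):
    __tablename__ = "user_groups"
    id_ = Column("id", Integer, primary_key=True)
    group_name = Column(String(255), nullable=False)
    users = relationship("User", secondary=user_groups_users,
                         back_populates="user_groups", passive_deletes=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    grp = UserGroup(group_name="admins")
    grp.users.append(User(username="jason"))
    session.add(grp)
    session.commit()

    # Joining through the relationship attribute supplies both hops
    # (users -> secondary -> user_groups), so no ambiguity remains.
    admins = (session.query(User)
              .join(User.user_groups)
              .filter(UserGroup.group_name == "admins")
              .all())
    print([u.username for u in admins])
```

`join(UserGroup)` alone fails because the join could go through either side of the secondary table; `join(User.user_groups)` pins down both the secondary table and the ON clauses from the mapping.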



Example of inline autogenerated use?

2020-11-04 Thread Jason Pascucci
I couldn't lay my hands on an example of using 
MigrationContext(connect=) and running the results of 
produce_migrations(Base.metadata, mc). 

I got close, I think, evaling the results of render_python_code (which 
failed due to spaces, but I can wrap it into a def, I guess), but I think 
there's got to be a better way, I'm just not seeing it.

(NB: I can guarantee that the migrations will always be simple alters to 
add only, so not using the whole infrastructure lets it be consistent, and 
due to the nature of the project, I'd like to keep it simple.)

Thanks

JRP

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy-alembic" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy-alembic+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy-alembic/7d7499f6-ce53-4aa3-97f5-20d5c25cc41en%40googlegroups.com.


[sqlalchemy] Re: Querying an Association Proxy using an enum column in the intermediate table

2019-04-22 Thread Jason Armstrong
Here is an update to the original that now works, implementing Mike's 
solution of a single table inheritance for the association object.

Thank you again!


from sqlalchemy import Column, ForeignKey, Enum, Text, Integer, 
create_engine
from sqlalchemy.orm import relationship, backref, sessionmaker, 
scoped_session
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.ext.declarative import declarative_base
from enum import Enum as eEnum

Base = declarative_base()

class WorkerType(eEnum):
CONTRACTOR = 0
EMPLOYEE = 1
PART_TIME = 2

class Employer(Base):
__tablename__ = "employer"

id = Column(Integer, primary_key=True)
name = Column(Text)

contractor = association_proxy(
"employer_workers_contractor", "worker",
creator=lambda el:EmployerWorkerContractor(worker=el)
)

employee = association_proxy(
"employer_workers_employee", "worker",
creator=lambda el:EmployerWorkerEmployee(worker=el)
)

part_time = association_proxy(
"employer_workers_part_time", "worker",
creator=lambda el:EmployerWorkerPartTime(worker=el)
)


class EmployerWorkerAssociation(Base):
__tablename__ = "employer_to_worker"

employer_id = Column(ForeignKey("employer.id"), primary_key=True)
worker_id = Column(ForeignKey("worker.id"), primary_key=True)
worker_type = Column(Enum(WorkerType), primary_key=True)

__mapper_args__ = {
"polymorphic_identity": "EmployerWorkerAssociation",
"polymorphic_on": "worker_type"
}


class EmployerWorkerContractor(EmployerWorkerAssociation):
employer = relationship(
"Employer",
backref=backref("employer_workers_contractor", cascade="all, 
delete-orphan")
)

worker = relationship(
"Worker",
backref=backref("worker_employers_contractor")
)

__mapper_args__ = {"polymorphic_identity": WorkerType.CONTRACTOR}


class EmployerWorkerEmployee(EmployerWorkerAssociation):
employer = relationship(
"Employer",
backref=backref("employer_workers_employee", cascade="all, 
delete-orphan")
)

worker = relationship(
"Worker",
backref=backref("worker_employers_employee")
)

__mapper_args__ = {"polymorphic_identity": WorkerType.EMPLOYEE}


class EmployerWorkerPartTime(EmployerWorkerAssociation):
employer = relationship(
"Employer",
backref=backref("employer_workers_part_time", cascade="all, 
delete-orphan")
)

worker = relationship(
"Worker",
backref=backref("worker_employers_part_time")
)

__mapper_args__ = {"polymorphic_identity": WorkerType.PART_TIME}




class Worker(Base):
__tablename__ = "worker"
id = Column(Integer, primary_key=True)
name = Column(Text)

contractor = association_proxy(
"worker_employers_contractor", "employer",
creator=lambda el:EmployerWorkerContractor(employer=el)
)

employee = association_proxy(
"worker_employers_employee", "employer",
creator=lambda el:EmployerWorkerEmployee(employer=el)
)

part_time = association_proxy(
"worker_employers_part_time", "employer",
creator=lambda el:EmployerWorkerPartTime(employer=el)
)



engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
session = scoped_session(Session)
e1 = Employer(name="The Company")
session.add(e1)
session.commit()

w1 = Worker(name="The Programmer")
session.add(w1)
session.commit()

e1.contractor.append(w1)
session.add(e1)
session.commit()

print(f"Contractors: {len(e1.contractor)}")
print(f"Employees  : {len(e1.employee)}")
print(f"Part Timers: {len(e1.part_time)}")

e2 = Employer(name="The Enterprise")
session.add(e2)

w1.part_time.append(e2)
session.commit()

print(f"worker   {w1.name} c={len(w1.contractor)} e={len(w1.employee)} 
p={len(w1.part_time)}")
print(f"employer {e1.name} c={len(e1.contractor)} e={len(e1.employee)} 
p={len(e1.part_time)}")
print(f"employer {e2.name} c={len(e2.contractor)} e={len(e2.employee)} 
p={len(e2.part_time)}")





Re: [sqlalchemy] Querying an Association Proxy using an enum column in the intermediate table

2019-04-22 Thread Jason Armstrong
On Mon, 22 Apr 2019 at 20:10, Mike Bayer  wrote:

> So the architecture here should instead use three relationships, and to
> make it automatically handle the enumeration you can use a single table
> inheritance pattern on your association object, meaning you would have
> ContractorAssociation, EmployeeAssociation, and PartTimeAssociation.   You
> would find things work pretty automatically in that style.  alternatively
> you can use custom join conditions on each relationship. There are even
> ways you can programmatically build up these classes and relationships
> based on the items available in the enum, if that were a concern.

Thanks for the tip. I thought I might be going about it the wrong way, and
I'll have a go at implementing your suggestion.


Regards,

--
Jason Armstrong



[sqlalchemy] Querying an Association Proxy using an enum column in the intermediate table

2019-04-21 Thread Jason Armstrong
Hello everyone,

I am modelling a relationship between two groups which is a many-to-many 
relationship, with an additional constraint.
I have an Employer and a Worker class. Employers can have a relationship to 
a worker in a number of ways, which
I'm representing by an enum: Contractor, Employee, Part-Time etc.
The join table has three columns, employer_id, worker_id and worker_type, 
which is the enum value.

The below code works for creating the relationship, but I don't quite 
understand how to query it.

I create an employer, and then a worker.
Then I use
employer.contractor.append(worker)

The join table contains the correct ids and the 'CONTRACTOR' enum value.
But when I iterate through employer.part_time, I get back the same worker 
who is a 'CONTRACTOR'. I'm expecting that only
employer.contractor should contain values, and that employer.employee and 
employer.part_time should be empty.

Is there something corresponding to *creator* that can be used when 
retrieving the values?
Or, what is the best way for me to get back the correct values? Do I need 
to issue a query, and with what parameters?


from sqlalchemy import Column, ForeignKey, Enum, Text, Integer, 
create_engine
from sqlalchemy.orm import relationship, backref, sessionmaker, 
scoped_session
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.ext.declarative import declarative_base
from enum import Enum as eEnum


Base = declarative_base()

class WorkerType(eEnum):
CONTRACTOR = 0
EMPLOYEE = 1
PART_TIME = 2

class Employer(Base):
__tablename__ = "employer"

id = Column(Integer, primary_key=True)
name = Column(Text)

contractor = association_proxy(
"employer_workers", "worker",
creator=lambda el:EmployerWorkerAssociation(
worker=el,
worker_type=WorkerType.CONTRACTOR
)
)

employee = association_proxy(
"employer_workers", "worker",
creator=lambda el:EmployerWorkerAssociation(
worker=el,
worker_type=WorkerType.EMPLOYEE
)
)

part_time = association_proxy(
"employer_workers", "worker",
creator=lambda el:EmployerWorkerAssociation(
worker=el,
worker_type=WorkerType.PART_TIME
)
)


class EmployerWorkerAssociation(Base):
__tablename__ = "employer_to_worker"

employer_id = Column(ForeignKey("employer.id"), primary_key=True)
worker_id = Column(ForeignKey("worker.id"), primary_key=True)
worker_type = Column(Enum(WorkerType), primary_key=True)

employer = relationship(
"Employer",
backref=backref("employer_workers", cascade="all, delete-orphan")
)

worker = relationship(
"Worker",
backref=backref("worker_employers")
)

class Worker(Base):
__tablename__ = "worker"
id = Column(Integer, primary_key=True)
name = Column(Text)



engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
session = scoped_session(Session)

e1 = Employer(name="The Company")
session.add(e1)
session.commit()

w1 = Worker(name="The Programmer")
session.add(w1)
session.commit()

e1.contractor.append(w1)
session.add(e1) 
session.commit()

# I'm expecting contractor=1 employee=0 part_time=0
print(f"Contractors: {len(e1.contractor)}")
print(f"Employees  : {len(e1.employee)}")
print(f"Part Timers: {len(e1.part_time)}")

# There shouldn't be anyone 'PART_TIME'
for worker in e1.part_time:
print(f"{worker.id} {worker.name}")



Thank you for your time.

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] Re: GlobalFilter recipe

2017-10-31 Thread Jason Gosnell
Minor correction to the above:

I observed that in 1.0.0+, TemporalOption.*process_query_conditionally* is 
never called (hence the None/None for temporal_lower/temporal_upper in the 
resulting query).

Jason

On Tuesday, October 31, 2017 at 9:47:39 PM UTC-7, Jason Gosnell wrote:
>
> Hi folks,
>
> Is the GlobalFilter recipe expected to still work with newer versions of 
> sqlalchemy? I've copied and pasted the code right out of the recipe and get 
> this assertion:
>
> 2017-10-31 21:38:22,391 INFO sqlalchemy.engine.base.Engine SELECT 
> parent.id AS parent_id, parent.timestamp AS parent_timestamp, child_1.id 
> AS child_1_id, child_1.parent_id AS child_1_parent_id, child_1.timestamp AS 
> child_1_timestamp 
> FROM parent LEFT OUTER JOIN child AS child_1 ON parent.id = 
> child_1.parent_id AND child_1.timestamp BETWEEN ? AND ? 
> WHERE parent.timestamp BETWEEN ? AND ?
> 2017-10-31 21:38:22,391 INFO sqlalchemy.engine.base.Engine ('2009-10-16 
> 12:00:00.00', '2009-10-18 12:00:00.00', '2009-10-16 
> 12:00:00.00', '2009-10-18 12:00:00.00')
> 2017-10-31 21:38:22,392 INFO sqlalchemy.engine.base.Engine SELECT 
> parent.id AS parent_id, parent.timestamp AS parent_timestamp 
> FROM parent 
> WHERE parent.timestamp BETWEEN ? AND ?
> 2017-10-31 21:38:22,392 INFO sqlalchemy.engine.base.Engine ('2009-10-16 
> 12:00:00.00', '2009-10-18 12:00:00.00')
> 2017-10-31 21:38:22,393 INFO sqlalchemy.engine.base.Engine SELECT child.id 
> AS child_id, child.parent_id AS child_parent_id, child.timestamp AS 
> child_timestamp 
> FROM child 
> WHERE ? = child.parent_id AND child.timestamp BETWEEN ? AND ?
> 2017-10-31 21:38:22,393 INFO sqlalchemy.engine.base.Engine (2, None, None)
> Traceback (most recent call last):
>   File "globalfilter_original.py", line 114, in 
> assert parents[0].temporal_children == [c5]
> AssertionError
>
> I backtracked to sqlalchemy 0.9.10 to confirm it works as advertised, 
> and then rolled forward to 1.0.0 to confirm it fails.
>
> I also observed that in 1.0.0+, TemporalOption.process_query is never 
> called.
>
> Thanks,
> Jason
>



[sqlalchemy] GlobalFilter recipe

2017-10-31 Thread Jason Gosnell
Hi folks,

Is the GlobalFilter recipe expected to still work with newer versions of 
sqlalchemy? I've copied and pasted the code right out of the recipe and get 
this assertion:

2017-10-31 21:38:22,391 INFO sqlalchemy.engine.base.Engine SELECT parent.id 
AS parent_id, parent.timestamp AS parent_timestamp, child_1.id AS 
child_1_id, child_1.parent_id AS child_1_parent_id, child_1.timestamp AS 
child_1_timestamp 
FROM parent LEFT OUTER JOIN child AS child_1 ON parent.id = 
child_1.parent_id AND child_1.timestamp BETWEEN ? AND ? 
WHERE parent.timestamp BETWEEN ? AND ?
2017-10-31 21:38:22,391 INFO sqlalchemy.engine.base.Engine ('2009-10-16 
12:00:00.00', '2009-10-18 12:00:00.00', '2009-10-16 
12:00:00.00', '2009-10-18 12:00:00.00')
2017-10-31 21:38:22,392 INFO sqlalchemy.engine.base.Engine SELECT parent.id 
AS parent_id, parent.timestamp AS parent_timestamp 
FROM parent 
WHERE parent.timestamp BETWEEN ? AND ?
2017-10-31 21:38:22,392 INFO sqlalchemy.engine.base.Engine ('2009-10-16 
12:00:00.00', '2009-10-18 12:00:00.00')
2017-10-31 21:38:22,393 INFO sqlalchemy.engine.base.Engine SELECT child.id 
AS child_id, child.parent_id AS child_parent_id, child.timestamp AS 
child_timestamp 
FROM child 
WHERE ? = child.parent_id AND child.timestamp BETWEEN ? AND ?
2017-10-31 21:38:22,393 INFO sqlalchemy.engine.base.Engine (2, None, None)
Traceback (most recent call last):
  File "globalfilter_original.py", line 114, in 
assert parents[0].temporal_children == [c5]
AssertionError

I backtracked to sqlalchemy 0.9.10 to confirm it works as advertised, and 
then rolled forward to 1.0.0 to confirm it fails.

I also observed that in 1.0.0+, TemporalOption.process_query is never 
called.

Thanks,
Jason



Re: [sqlalchemy] Re: Help with ORM join syntax

2017-04-03 Thread Jason T.
Yes, I had read the documentation. I am just new to SQL and databases in 
general. I think there may be an even cleaner way to do this since I have 
relationship()s built into the models, but because the tables have multiple 
foreign keys I had to specify the ON clause anyhow. Thanks for the help!

For the record, the final ORM level calls:

bus1_alias = aliased(sam.Bus)
bus2_alias = aliased(sam.Bus)

branch_db = self.db_hook.session.query(sam.Branch). \
    join(bus1_alias, and_(bus1_alias.id == sam.Branch.id_from_bus,
                          bus1_alias.id_model == sam.Branch.id_model)). \
    join(bus2_alias, and_(bus2_alias.id == sam.Branch.id_to_bus,
                          bus2_alias.id_model == sam.Branch.id_model)). \
    filter(sam.Branch.ckt == ckt). \
    filter(sam.Branch.id_model == case.id). \
    filter(or_(and_(bus1_alias.number == bus1, bus2_alias.number == bus2),
               and_(bus1_alias.number == bus2, bus2_alias.number == bus1)))



On Monday, April 3, 2017 at 4:38:14 PM UTC-5, Mike Bayer wrote:
>
>
> Have you read the tutorial at 
> http://docs.sqlalchemy.org/en/rel_1_1/orm/tutorial.html#querying-with-joins 
>   and read all the examples at 
>
> http://docs.sqlalchemy.org/en/rel_1_1/orm/query.html#sqlalchemy.orm.query.Query.join?
>  
>
> There's nothing odd about the joins you're trying to build here. 
> You've got the aliased(), you have the and_/or_, and all the conditions. 
>   Just need to move some of those filter() conditions into 
> query.join(bus1_alias, ). 
>
>
>
>
>
> On 04/03/2017 04:53 PM, Jason T. wrote: 
> > Okay. I figured out how to use the ORM without joins, but I still can't 
> > figure out how to use the joins. :( 
> > 
> > bus1_alias = aliased(sam.Bus) 
> > bus2_alias = aliased(sam.Bus) 
> > 
> > branch_db = self.db_hook.session.query(sam.Branch). \ 
> > filter(sam.Branch.id_model == sam.Bus.id_model). \ 
> > filter(sam.Branch.ckt == ckt). \ 
> > filter(sam.Branch.id_model == case.id). \ 
> > filter(or_(and_(sam.Branch.id_from_bus == bus1_alias.id, 
> sam.Branch.id_to_bus == bus2_alias.id), 
> >and_(sam.Branch.id_from_bus == bus1_alias.id, 
> sam.Branch.id_to_bus == bus2_alias.id))). \ 
> > filter(bus1_alias.number == bus1). \ 
> > filter(bus2_alias.number == bus2).first() 
> > 
> > 
> > 
> > 
> > On Monday, April 3, 2017 at 1:29:49 PM UTC-5, Jason T. wrote: 
> > 
> > All, 
> > 
> > I am able to join two tables and get the desired result with the 
> > below SQL syntax; however, I am having trouble translating this SQL 
> > to the ORM join syntax.  Any help will be appreciated. 
> > 
> > SELECT * 
> > FROM raw.branch as b 
> > JOIN raw.bus AS bus1 
> > ON bus1.id = b.id_from_bus 
> > AND bus1.id_model = b.id_model 
> > JOIN raw.bus AS bus2 
> > ON bus2.id = b.id_to_bus 
> > AND bus2.id_model = b.id_model 
> > WHERE b.id_model = 10 
> > AND b.ckt = '1' 
> > AND ((bus1.number = 510417 AND bus2.number = 547486) OR 
> > (bus2.number = 510417 AND bus1.number = 547486)) 
> > 
>



[sqlalchemy] Re: Help with ORM join syntax

2017-04-03 Thread Jason T.
Okay. I figured out how to use the ORM without joins, but I still can't 
figure out how to use the joins. :(

bus1_alias = aliased(sam.Bus)
bus2_alias = aliased(sam.Bus)

branch_db = self.db_hook.session.query(sam.Branch). \
    filter(sam.Branch.id_model == sam.Bus.id_model). \
    filter(sam.Branch.ckt == ckt). \
    filter(sam.Branch.id_model == case.id). \
    filter(or_(and_(sam.Branch.id_from_bus == bus1_alias.id,
                    sam.Branch.id_to_bus == bus2_alias.id),
               and_(sam.Branch.id_from_bus == bus1_alias.id,
                    sam.Branch.id_to_bus == bus2_alias.id))). \
    filter(bus1_alias.number == bus1). \
    filter(bus2_alias.number == bus2).first()




On Monday, April 3, 2017 at 1:29:49 PM UTC-5, Jason T. wrote:
>
> All,
>
> I am able to join two tables and get the desired result with the below SQL 
> syntax; however, I am having trouble translating this SQL to the ORM join 
> syntax.  Any help will be appreciated.  
>
> SELECT *
> FROM raw.branch as b
> JOIN raw.bus AS bus1
>   ON bus1.id = b.id_from_bus
>   AND bus1.id_model = b.id_model
> JOIN raw.bus AS bus2
>   ON bus2.id = b.id_to_bus
>   AND bus2.id_model = b.id_model
> WHERE b.id_model = 10
>   AND b.ckt = '1'
>   AND ((bus1.number = 510417 AND bus2.number = 547486) OR
>(bus2.number = 510417 AND bus1.number = 547486))
>
>



[sqlalchemy] Help with ORM join syntax

2017-04-03 Thread Jason T.
All,

I am able to join two tables and get the desired result with the below SQL 
syntax; however, I am having trouble translating this SQL to the ORM join 
syntax.  Any help will be appreciated.  

SELECT *
FROM raw.branch as b
JOIN raw.bus AS bus1
  ON bus1.id = b.id_from_bus
  AND bus1.id_model = b.id_model
JOIN raw.bus AS bus2
  ON bus2.id = b.id_to_bus
  AND bus2.id_model = b.id_model
WHERE b.id_model = 10
  AND b.ckt = '1'
  AND ((bus1.number = 510417 AND bus2.number = 547486) OR
   (bus2.number = 510417 AND bus1.number = 547486))
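
For reference, a self-contained sketch of the ORM translation using stand-in models (the thread's real models live in the poster's `sam` module, so the model definitions here are illustrative):

```python
from sqlalchemy import Column, Integer, Text, and_, create_engine, or_
from sqlalchemy.orm import aliased, sessionmaker
try:
    from sqlalchemy.orm import declarative_base  # 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Bus(Base):
    __tablename__ = "bus"
    id = Column(Integer, primary_key=True)
    id_model = Column(Integer)
    number = Column(Integer)


class Branch(Base):
    __tablename__ = "branch"
    id = Column(Integer, primary_key=True)
    id_model = Column(Integer)
    id_from_bus = Column(Integer)
    id_to_bus = Column(Integer)
    ckt = Column(Text)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# Two aliases of the same table, as in the SQL's bus1/bus2.
bus1_alias = aliased(Bus)
bus2_alias = aliased(Bus)

# Each join() carries its full ON clause, mirroring the raw SQL above.
query = (
    session.query(Branch)
    .join(bus1_alias, and_(bus1_alias.id == Branch.id_from_bus,
                           bus1_alias.id_model == Branch.id_model))
    .join(bus2_alias, and_(bus2_alias.id == Branch.id_to_bus,
                           bus2_alias.id_model == Branch.id_model))
    .filter(Branch.id_model == 10)
    .filter(Branch.ckt == "1")
    .filter(or_(and_(bus1_alias.number == 510417,
                     bus2_alias.number == 547486),
                and_(bus1_alias.number == 547486,
                     bus2_alias.number == 510417)))
)

print(str(query))  # renders two JOINs against bus_1 / bus_2
```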



[sqlalchemy] Re: Default behavior for sqlalchemy_utils.UUIDType

2016-08-02 Thread Jason Libbey
I see - I didn't quite understand the relationship between the sqlalchemy 
and sqlalchemy_utils libraries, but I think I've got it all straight now. 
To answer your question as to why I would need the dashes: the other 
dependencies expect to be able to look up UUIDs by the standard dashed 
format. At any rate, I've got it fixed now, thanks so much Mike!

On Friday, July 29, 2016 at 6:45:01 PM UTC-7, Jason Libbey wrote:
>
> Hi, this is my first post, so if it does not adhere to your guidelines, 
> please let me know and I'll fix it ASAP.
>
> I'm using the sqlalchemy_utils.UUIDType as per backend-agnostic-guid-type 
> <http://docs.sqlalchemy.org/en/latest/core/custom_types.html#backend-agnostic-guid-type>
> .
>
> from sqlalchemy_utils import UUIDType
>
>
>
> class ApplicationStore(Base, Timestamp):
>
> __tablename__ = 'applications'
>
> id = Column(UUIDType(binary=False), primary_key=True, default=uuid.uuid4)
>
> user_uuid = Column(UUIDType(binary=False), unique=True, index=True, 
> nullable=False)
>
>
>
> Since I am using a mysql db, the UUIDType is correctly falling back to the 
> CHAR32 representation.  However, the value that gets written to the db is 
> the uuid with no dashes.
>
>
> application = application_service.create(user_uuid=uuid.uuid4())
>
>
>
> The string representation of the uuid shows it with dashes, but the 
> database char is saved without dashes.  So my questions are:
> 1. Am I using this correctly?
> 2. Is this the expected behavior?
> 3. How can I force it to save as uuid with dashes, while still using the 
> backend agnostic guid type?
>
> Python version: 2.7.10
> SQLAlchemy version: 1.0.12
>



[sqlalchemy] Default behavior for sqlalchemy_utils.UUIDType

2016-07-29 Thread Jason Libbey
Hi, this is my first post, so if it does not adhere to your guidelines, 
please let me know and I'll fix it ASAP.

I'm using the sqlalchemy_utils.UUIDType as per the backend-agnostic GUID 
type recipe 
<http://docs.sqlalchemy.org/en/latest/core/custom_types.html#backend-agnostic-guid-type>.

from sqlalchemy_utils import UUIDType


class ApplicationStore(Base, Timestamp):

    __tablename__ = 'applications'

    id = Column(UUIDType(binary=False), primary_key=True,
                default=uuid.uuid4)

    user_uuid = Column(UUIDType(binary=False), unique=True, index=True,
                       nullable=False)



Since I am using a MySQL db, the UUIDType is correctly falling back to the 
CHAR(32) representation.  However, the value that gets written to the db is 
the uuid with no dashes.


application = application_service.create(user_uuid=uuid.uuid4())



The string representation of the uuid shows it with dashes, but the 
database char is saved without dashes.  So my questions are:
1. Am I using this correctly?
2. Is this the expected behavior?
3. How can I force it to save as uuid with dashes, while still using the 
backend agnostic guid type?

Python version: 2.7.10
SQLAlchemy version: 1.0.12
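
For context on the two textual forms involved (a stdlib illustration, not part of the thread): `str(u)` is the dashed 36-character form, `u.hex` is the undashed 32-character form a CHAR(32) fallback stores, and `uuid.UUID()` parses both, which gives one way to normalize lookups:

```python
import uuid

u = uuid.uuid4()

dashed = str(u)    # canonical form: 32 hex chars plus 4 dashes, 36 total
undashed = u.hex   # same value without dashes: 32 hex chars

# uuid.UUID() accepts either form, so lookups can normalize first
# instead of caring how the column happens to store the value.
print(len(dashed), len(undashed), uuid.UUID(undashed) == u)
```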



Re: [sqlalchemy] session flush inserts performing VERY slow for MS Sql Server

2016-03-07 Thread Jason Brownstein
Thanks Mike - will run the full profile ASAP.

On Monday, March 7, 2016, Mike Bayer <clas...@zzzcomputing.com> wrote:

> What will make inserts slow is network as well as indexes.   If you aren't
> comparing your script to some other backend then there's probably a whole
> lot of other things that can get in the way like unicode encoding, large
> columns or other surprises in the model.   If you run the full profile
> version from the FAQ (please refer to the 1.0 documentation... Also you're
> on sqla 1.0 right?) I can identify where the time is spent with a full
> Python profile.
>
> On Mar 7, 2016, at 7:12 PM, jbrownst...@clearcosthealth.com
> <javascript:_e(%7B%7D,'cvml','jbrownst...@clearcosthealth.com');> wrote:
>
> I've run sqlalchemy on MySQL dbs and never had a problem with performance,
> but when I started writing an app that connected to Microsoft SQL Server
> 2014 via pyodbc I ran into a serious performance problem.
> I'm running the app on my laptop and the database server IS in a VPN but
> my internet is fast enough and the vpn connection is reportedly fast.
>
> I decided to run the performance script via the
> http://docs.sqlalchemy.org/en/rel_0_8/faq.html (copied below for
> convenience)
> The FAQ reports being able to insert **100,000 rows in ~14 seconds**; when
> I run the script below I am inserting **1000 rows in ~100 seconds**.
>
> Is there some additional profiling that I can do?
> Is there something wrong with my connection set up?
>
> *OUTPUT:*
> before init
> engine: 0.075476837
> session remove: 0.075752045
> session configure: 0.0770001411438
> drop all: 1.02800011635
> create all: 1.28400015831
> init complete 1.2861335
> flush complete 0.192000150681 // total rows added: [0]
> flush complete 115.70048 // total rows added: [1000]
> flush complete 220.6515 // total rows added: [2000]
>
>
> CODE
> import time
>
> from sqlalchemy.ext.declarative import declarative_base
> from sqlalchemy import Column, Integer, String, create_engine
> from sqlalchemy.orm import scoped_session, sessionmaker
>
> import urllib
>
> Base = declarative_base()
> DBSession = scoped_session(sessionmaker())
> engine = None
>
> server = "xx.xxx.xx.xx,x"
> database = "test"
>
> raw_connection_string = (
>     "DRIVER={SQL Server Native Client 11.0};"
>     "Server=%s;Database=%s;TDS_Version=8.0;Trusted_Connection=yes;"
>     % (server, database))
> quoted = urllib.quote_plus(raw_connection_string)
> connection_string = "mssql+pyodbc:///?odbc_connect={}".format(quoted)
>
> ECHO = None  # "debug"
>
>
> class Customer(Base):
>     __tablename__ = "customer"
>     id = Column(Integer, primary_key=True)
>     name = Column(String(255))
>
>
> def init_sqlalchemy(dbname='jason'):
>     global engine
>     t0 = time.time()
>     engine = create_engine(connection_string, echo=ECHO)
>     print 'engine: %s' % str(time.time() - t0)
>     DBSession.remove()
>     print 'session remove: %s' % str(time.time() - t0)
>     DBSession.configure(bind=engine, autoflush=False,
>                         expire_on_commit=False)
>     print 'session configure: %s' % str(time.time() - t0)
>     Base.metadata.drop_all(engine)
>     print 'drop all: %s' % str(time.time() - t0)
>     Base.metadata.create_all(engine)
>     print 'create all: %s' % str(time.time() - t0)
>
>
> def test_sqlalchemy_orm(n=10):
>     t0 = time.time()
>     print 'before init'
>     init_sqlalchemy()
>     print 'init complete %s' % str(time.time() - t0)
>
>     t0 = time.time()
>     for i in range(n):
>         customer = Customer()
>         customer.name = 'NAME ' + str(i)
>         DBSession.add(customer)
>         if i % 1000 == 0:
>             DBSession.flush()
>             print 'flush complete %s // total rows added: [%s]' % (
>                 str(time.time() - t0), i)
>     DBSession.commit()
>     print("SQLAlchemy ORM: Total time for " + str(n) +
>           " records " + str(time.time() - t0) + " secs")
>
>
> if __name__ == '__main__':
>     test_sqlalchemy_orm(10)
>
>
>
>
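
For a sense of how much per-statement round trips cost even with no network in the way, here is a stdlib-only comparison (an illustration using sqlite3, not the thread's MSSQL setup): the same rows inserted one statement at a time versus one `executemany()` batch. Over a VPN link to SQL Server the per-row gap is far larger, since each statement pays a full network round trip.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")

rows = [("NAME %d" % i,) for i in range(10000)]

# One statement per row.
t0 = time.time()
for row in rows:
    conn.execute("INSERT INTO customer (name) VALUES (?)", row)
conn.commit()
per_row = time.time() - t0

conn.execute("DELETE FROM customer")
conn.commit()

# Same rows as a single batched executemany() call.
t0 = time.time()
conn.executemany("INSERT INTO customer (name) VALUES (?)", rows)
conn.commit()
batched = time.time() - t0

print("per-row: %.4fs  batched: %.4fs" % (per_row, batched))
```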

Re: [sqlalchemy] running parallel migrations using sharded/partioned/spaced queries?

2015-10-14 Thread jason kirtland
If you can partition the rows numerically, this is trivially easy to
implement using Redis as the orchestrator.

For example if you have integer PKs, you might have a loop like:

offset = 0
while offset < tablesize:
    for row in query[offset:offset + batchsize]:
        migrate(row)
    commit()
    offset += batchsize

With redis orchestrating, you use a key in redis and INCRBY to reliably
distribute batches to an arbitrary number of workers on an arbitrary number
of hosts.

while True:
    offset = redis.incrby('migration-offset', batchsize)
    rows = query[offset:offset + batchsize]
    if not rows:
        break
    for row in rows:
        migrate(row)
    commit()

INCRBY is atomic and returns the adjusted value, so every invocation of
this script that calls into redis and INCRBYs by, say, 1000, has its own
chunk of 1000 to work on. For a starting value of -1000 and four
invocations, you'd see 0, 1000, 2000 and 3000.

I'll typically do this on one invocation, see that it's running well and
that I chose a performant batch size, and then spin up additional workers
on more cores until the migration hits the overall throughput required.
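
The same hand-out scheme can be sketched self-contained, simulating INCRBY with a thread-safe counter (the real script would use the `redis` client as above):

```python
import threading


class Counter:
    """Stand-in for Redis INCRBY: atomic add-and-return."""

    def __init__(self, start):
        self._value = start
        self._lock = threading.Lock()

    def incrby(self, amount):
        with self._lock:
            self._value += amount
            return self._value


rows = list(range(3500))          # pretend table of 3500 PKs
batchsize = 1000
counter = Counter(start=-batchsize)   # first incrby() returns offset 0
migrated = []
migrated_lock = threading.Lock()


def worker():
    while True:
        offset = counter.incrby(batchsize)
        batch = rows[offset:offset + batchsize]
        if not batch:
            break                 # past the end of the table: done
        with migrated_lock:
            migrated.extend(batch)


threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(migrated))   # every row handed out exactly once
```

Because each `incrby()` returns a distinct offset, the four workers receive the disjoint batches 0, 1000, 2000, and 3000, and no row is migrated twice.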



On Wed, Oct 14, 2015 at 9:32 AM, Jonathan Vanasco 
wrote:

> I have to run a script on 2MM objects to update the database.  Not really
> a schema migration, more like changing the internal data representation in
> the fields.
>
> There's a bit of post-processing and bottlenecks involved, so doing
> everything one-at-a-time will take a few days.
>
> I'd like to split this out into 5-10 'task runners' that are each
> responsible for a a section of the database (ie, every 5th record).  That
> should considerably drop the runtime.
>
> I thought I had seen a recipe for this somewhere, but checked and couldn't
> find anything.  That leads me to question if this is a good idea or not.
> Anyone have thoughts/pointers?
>
>



Re: [sqlalchemy] Sending queue messages/emails after model commit

2014-09-22 Thread jason kirtland
Hi Alex,

I have a similar use case, and fixed it by buffering the signals until the
session transaction completes. On rollback, the buffered signals are
discarded; on successful commit, the signals are truly emitted.

Cheers,
Jason
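
That buffering approach can be sketched in plain Python (the `after_commit`/`after_rollback` methods below are stand-ins to be wired to whatever transaction events the session layer exposes, e.g. SQLAlchemy's session events):

```python
class BufferedSignals:
    """Collect signals during a transaction; emit only on commit."""

    def __init__(self, emit):
        self._emit = emit          # real delivery: queue message, email, ...
        self._pending = []

    def send(self, signal, **payload):
        # Called from business logic, e.g. inside mark_as_paid().
        self._pending.append((signal, payload))

    def after_commit(self):
        # Commit succeeded: deliver everything buffered this transaction.
        pending, self._pending = self._pending, []
        for signal, payload in pending:
            self._emit(signal, payload)

    def after_rollback(self):
        # Transaction failed: its signals must never go out.
        self._pending = []


sent = []
signals = BufferedSignals(
    emit=lambda sig, payload: sent.append((sig, payload)))

signals.send("order.paid", order_id=1)
signals.after_rollback()           # rolled back: nothing delivered

signals.send("order.paid", order_id=2)
signals.after_commit()             # committed: buffered signal goes out

print(sent)
```

`mark_as_paid` can then call `send()` unconditionally, and delivery follows the transaction outcome without the callers needing to know which signals were raised.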


On Mon, Sep 22, 2014 at 2:20 AM, Alex Michael alex...@tictail.com wrote:

 Hey,

 From my understanding it's recommended that the business logic does not
 commit the session and that the application itself handles the session
 lifecycle. Following that, I have all the session handling logic in my
 controllers so the business logic just changes the objects as necessary and
 then the controllers call .commit() when needed. When a model is committed
 and say X property has changed, I need to send a queue message. My problem
 is that I'm not sure where the logic for emitting such signals should live
 in order to avoid duplicating logic all over the place. An example:

 I have an order which I take a payment for. If the payment is successful,
 I mark the order as paid. At this point I need to emit a signal. If the
 order is pending, I wait for a notification to come in from the payment
 gateway and then mark the order as paid. My business logic has a
 `mark_as_paid` function which changes the status of the order. Ideally I
 would like to emit the signal in the `mark_as_paid` method but I don't know
 at that point in time if the session commit will succeed or not. The
 alternative would be to emit the signal manually after the session was
 committed but that would (1) lead to duplicated logic since `mark_as_paid`
 can be triggered from many code paths (2) not always work since the status
 of the order is determined dynamically so the caller doesn't actually know
 what changed in order to emit the correct signal.

 Am I missing something here? I'd appreciate any help.

 Thanks!

 -- alex





Re: [sqlalchemy] unit tests failing on 0.9.6

2014-07-09 Thread Jason Newton
Yes, it seems to me like it gets to the end and then commits suicide just 
before returning, but I wasn't able to make heads or tails of it.

Nothing fancy to running it - just unpack the PyPI archive and run python 
setup.py test.  I first ran into it when updating the openSUSE package from 
0.8.5, but it's reproducible inside the chroot jail OBS creates as well as 
on my desktop.

On Tuesday, July 8, 2014 3:43:02 PM UTC-7, Michael Bayer wrote:

  how are you running them?

 it isn't reporting any failure, this seems like it's finding a bad test 
 case.   do you have any test.py type stuff it might be hitting?



 On 7/8/14, 5:56 PM, Jason Newton wrote:
  
 Any thoughts on why the unit testing is failing?

 platform linux2 -- Python 2.7.6 -- py-1.4.20 -- pytest-2.5.2 -- 
 /usr/bin/python

 

 == 5776 passed, 
 688 skipped in 139.77 seconds 
 ===
 Traceback (most recent call last):
   File setup.py, line 165, in module
 run_setup(True)
   File setup.py, line 146, in run_setup
 **kwargs
   File /usr/lib64/python2.7/distutils/core.py, line 152, in setup
 dist.run_commands()
   File /usr/lib64/python2.7/distutils/dist.py, line 953, in run_commands
 self.run_command(cmd)
   File /usr/lib64/python2.7/distutils/dist.py, line 972, in run_command
 cmd_obj.run()
   File /usr/lib/python2.7/site-packages/setuptools/command/test.py, line 
 146, in run
 self.with_project_on_sys_path(self.run_tests)
   File /usr/lib/python2.7/site-packages/setuptools/command/test.py, line 
 127, in with_project_on_sys_path
 func()
   File /usr/lib/python2.7/site-packages/setuptools/command/test.py, line 
 167, in run_tests
 testRunner=self._resolve_as_ep(self.test_runner),
   File /usr/lib64/python2.7/unittest/main.py, line 94, in __init__
 self.parseArgs(argv)
   File /usr/lib64/python2.7/unittest/main.py, line 149, in parseArgs
 self.createTests()
   File /usr/lib64/python2.7/unittest/main.py, line 158, in createTests
 self.module)
   File /usr/lib64/python2.7/unittest/loader.py, line 130, in 
 loadTestsFromNames
 suites = [self.loadTestsFromName(name, module) for name in names]
   File /usr/lib64/python2.7/unittest/loader.py, line 122, in 
 loadTestsFromName
 (obj, test))
 TypeError: calling function main at 0x2d3b500 returned 0, not a test


  -Jason


  



Re: [sqlalchemy] unit tests failing on 0.9.6

2014-07-09 Thread Jason Newton
Works for me when run via py.test; I can live with it.


On Wed, Jul 9, 2014 at 1:55 PM, Mike Bayer mike...@zzzcomputing.com wrote:

  OK this is fixed now, I hope.



 On 7/9/14, 9:52 AM, Mike Bayer wrote:

 OK what if you just run it via py.test?  The distutils/setuptools
 approach is so often problematic.

 issue appeared in 0.9.6, not 0.9.5 ?




 On 7/9/14, 3:01 AM, Jason Newton wrote:

 Yes, it seems to me like it gets to the end and then commits suicide just
 before returning, but I wasn't able to make heads or tails.

 Nothing fancy to running it - just unpack the pypi archive, and run python
 setup.py test.  I first ran into it when updating the opensuse package from
 0.8.5 but it's reproducable inside the chroot jail OBS creates as well as
 on my desktop.

 On Tuesday, July 8, 2014 3:43:02 PM UTC-7, Michael Bayer wrote:

  how are you running them?

 it isn't reporting any failure, this seems like it's finding a bad test
 case.   do you have any test.py type stuff it might be hitting?



 On 7/8/14, 5:56 PM, Jason Newton wrote:

 Any thoughts on why the unit testing is failing?

 platform linux2 -- Python 2.7.6 -- py-1.4.20 -- pytest-2.5.2 --
 /usr/bin/python

 

 == 5776 passed, 688 skipped in 139.77 seconds ==
 Traceback (most recent call last):
   File "setup.py", line 165, in <module>
     run_setup(True)
   File "setup.py", line 146, in run_setup
     **kwargs
   File "/usr/lib64/python2.7/distutils/core.py", line 152, in setup
     dist.run_commands()
   File "/usr/lib64/python2.7/distutils/dist.py", line 953, in run_commands
     self.run_command(cmd)
   File "/usr/lib64/python2.7/distutils/dist.py", line 972, in run_command
     cmd_obj.run()
   File "/usr/lib/python2.7/site-packages/setuptools/command/test.py", line 146, in run
     self.with_project_on_sys_path(self.run_tests)
   File "/usr/lib/python2.7/site-packages/setuptools/command/test.py", line 127, in with_project_on_sys_path
     func()
   File "/usr/lib/python2.7/site-packages/setuptools/command/test.py", line 167, in run_tests
     testRunner=self._resolve_as_ep(self.test_runner),
   File "/usr/lib64/python2.7/unittest/main.py", line 94, in __init__
     self.parseArgs(argv)
   File "/usr/lib64/python2.7/unittest/main.py", line 149, in parseArgs
     self.createTests()
   File "/usr/lib64/python2.7/unittest/main.py", line 158, in createTests
     self.module)
   File "/usr/lib64/python2.7/unittest/loader.py", line 130, in loadTestsFromNames
     suites = [self.loadTestsFromName(name, module) for name in names]
   File "/usr/lib64/python2.7/unittest/loader.py", line 122, in loadTestsFromName
     (obj, test))
 TypeError: calling <function main at 0x2d3b500> returned 0, not a test


  -Jason










[sqlalchemy] unit tests failing on 0.9.6

2014-07-08 Thread Jason Newton
Any thoughts on why the unit testing is failing?

platform linux2 -- Python 2.7.6 -- py-1.4.20 -- pytest-2.5.2 --
/usr/bin/python



== 5776 passed, 688 skipped in 139.77 seconds ==
Traceback (most recent call last):
  File "setup.py", line 165, in <module>
    run_setup(True)
  File "setup.py", line 146, in run_setup
    **kwargs
  File "/usr/lib64/python2.7/distutils/core.py", line 152, in setup
    dist.run_commands()
  File "/usr/lib64/python2.7/distutils/dist.py", line 953, in run_commands
    self.run_command(cmd)
  File "/usr/lib64/python2.7/distutils/dist.py", line 972, in run_command
    cmd_obj.run()
  File "/usr/lib/python2.7/site-packages/setuptools/command/test.py", line 146, in run
    self.with_project_on_sys_path(self.run_tests)
  File "/usr/lib/python2.7/site-packages/setuptools/command/test.py", line 127, in with_project_on_sys_path
    func()
  File "/usr/lib/python2.7/site-packages/setuptools/command/test.py", line 167, in run_tests
    testRunner=self._resolve_as_ep(self.test_runner),
  File "/usr/lib64/python2.7/unittest/main.py", line 94, in __init__
    self.parseArgs(argv)
  File "/usr/lib64/python2.7/unittest/main.py", line 149, in parseArgs
    self.createTests()
  File "/usr/lib64/python2.7/unittest/main.py", line 158, in createTests
    self.module)
  File "/usr/lib64/python2.7/unittest/loader.py", line 130, in loadTestsFromNames
    suites = [self.loadTestsFromName(name, module) for name in names]
  File "/usr/lib64/python2.7/unittest/loader.py", line 122, in loadTestsFromName
    (obj, test))
TypeError: calling <function main at 0x2d3b500> returned 0, not a test


-Jason



[sqlalchemy] postgresql large objects

2014-06-25 Thread Jason Newton
Hi,

I'm wondering how I might get at postgresql's large object type (lob).  
It's not to be confused with the TOASTED bytea, which is currently limited 
to 1 GiB yet in practice tops out much lower (for me 400MiB) - it's a special 
table + api designed to handle very large binary objects, like a few 
hundred MiBs to more recently TiBs. I don't see appropriate definitions 
anywhere and can't find any mention of it really with sqlalchemy. psycopg2 
has support for it and calls it lobject; it provides a file-like interface 
to the lob, which is a good mapping since with libpgsql you use lo_creat, 
lo_seek, lo_write, and lo_read to work with these beasts.

I took a look at UserDefinedType but on the bind_processor, this doesn't 
distinguish between inserts and updates.  With inserts, you'd use an oid 
allocated from lo_creat in the transaction.  On updates, you'd use 
lo_trunc/lo_write.  As one more constraint, you must be in a transaction 
before any of these functions are usable.  To reference large objects, as 
they are explicitly an out of table storage, the postgresql specific oid is 
used (which allows garbage collection, referential integrity checks etc).

I'll also mention that other tables reference these large objects via oids, 
something like smart pointers in postgres. 

It'd be great to plug large objects into sqlalchemy properly - but can it 
be done?

-Jason



Re: [sqlalchemy] postgresql large objects

2014-06-25 Thread Jason Newton
Hi,

I've replied inline below.


On Wed, Jun 25, 2014 at 6:46 AM, Mike Bayer mike...@zzzcomputing.com
wrote:


  well we just added the OID type in 0.9.5, so you at least have that.


I came across the entry on the issue tracker a little bit after
submitting.  As usual for my luck, support for it wasn't added very long ago.


 The PG LOB feature is very sparsely documented - on PG's docs, they only
 seem to document the C API (
 www.postgresql.org/docs/9.2/static/lo-interfaces.html), feel free to
 point me to better documentation on this.


There are also server-side functions; they operate almost exactly the same as
the client-side api: http://www.postgresql.org/docs/9.3/static/lo-funcs.html .
There is no better documentation than those two that I know of, but they
were sufficient for me.


 As far as transparently embedding this into INSERT/UPDATE, my first
 thought is that this might be unwise considering how far removed these
 functions are from any kind of mainstream usage in such statements -
 particularly if separate statements have to be called per value to get at
 OIDs or similar.That PG's docs barely mention this whole feature much
 less any kind of regular SQL integration is a red flag.   PG's BYTEA type
 is already arbitrarily large so there is probably not much interest in a
 type like this.If it's the streaming feature you're looking for,
 SQLA's usual approach such as that of Oracle's LOB is to pre-stream it on
 the result set side (necessary, because fetching a batch of rows requires
 it), and cx_Oracle doesn't provide too much option to stream on the write
 side.   I've dealt a lot with streaming datatypes back in the day but
 sometime in the early 2000's everyone just stopped using them.


Seems to fly in the face of the point of SQLA, although the integration
difficulties are appreciated.  Most advanced postgresql drivers in any
language bindings have added support for this type, although none of them
can hide that it's file-like.  PG's BYTEA is NOT arbitrarily large: it has
quite real practical limits, and I've hit them regularly in storing
compressed HDF5 documents in the database as part of a bench-testing
framework.  The theoretical limit is 1GB but this limit is far less in
practice (
http://www.postgresql.org/message-id/cafj8pracfkoinp2uxeizod5krx29n2ofsoldh0w6ej7rxko...@mail.gmail.com
) . http://www.postgresql.org/docs/9.3/static/storage-toast.html  I'm not
looking for streaming, just retrieve/set of binary buffers.  It'd be nice to
translate them transparently to HDF5 python in-memory objects (ie h5py).



 As the feature involves SQL functions I don't think you'd be dealing only
 with bind_processor(), the SQL functions themselves would probably be via
 SQL-level processing, see
 http://docs.sqlalchemy.org/en/rel_0_9/core/types.html#applying-sql-level-bind-result-processing.
 It's true these functions aren't given access to the compiler context where
 you'd be able to discern INSERT from UPDATE, so I'd probably use two
 variants of datatype, an InsertLOB and UpdateLOB datatype, then I'd apply
 these to an INSERT or UPDATE statement at runtime probably using a
 @compiles decorator -
 http://docs.sqlalchemy.org/en/rel_0_9/core/compiler.html, that is, take
 the existing LOB type and just swap it out for InsertLOB/UpdateLOB.
 Adding compiler context to TypeEngine is something we could look into as a
 feature as well so that bind_expression() has this available somehow and
 switching around types wouldn't be needed.


Separate datatypes is clearly not a good approach to this.


 building this would involve stringing together hooks that are not quite
 set up for this, however for me to really assist here I'd need to see exact
 examples of what INSERT, UPDATE and anything else looks like in conjunction
 with these functions.


Everything starts with a transaction block.  Hand-crafted inserts use the
server-side lo_create(-1) (in sql), which allocates a new large object, with
a returning clause to get the oid back to the client side in one go.  Then I
start using the lobject api on the returned oid.   In patches I've made to
cppdb, in the bind of std::istream I call client-side lo_create, lo_open, a
sequence of lo_writes, and finally lo_close.  In retrospect this doesn't
handle updates particularly well, and maybe this could be used in sqlalchemy
as-is too - an update just allocates a new large object and unreferences
the old one.  This would leave it up to vacuumlo to GC those updated
lobs.  In my workload however, my lobs so far are immutable - they are
results of simulations and this is the way I've worked to date.  It
probably won't stay that way forever.



 I would note that psycopg2 itself also provides for extension types,
 including custom Connection and Cursor subclasses.   If a lot of things
 have to happen when these types are in play it might be something that can
 occur at that level, PG's type API is obviously a lot more PG specific.


-Jason


Re: [sqlalchemy] postgresql large objects

2014-06-25 Thread Jason Newton
 Just in case this wasn't apparent, you certainly *can* use psycopg2's
 bindings when you're in an otherwise SQLAlchemy app.  Worst case you can
 retrieve a raw psycopg2 connection using connection.raw_connection and do
 whatever you need.  If you truly have some use for LOBs, SQLAlchemy isn't
 preventing you from using it, it's just not providing any niceties around
 it.   The fact that these unusual use cases are not ever prohibited by SQLA
 further raises the bar to adding first class support for them.

 Yes, this much I knew would work, it's just clunkier than it could be.





  PG's BYTEA is NOT arbitrarily large, it has quite real practical limits
 and I've hit them regularly in storing compressed HDF5 documents in the
 database as part of a bench testing framework.  The theoretical limit is
 1GB but this limit is far less in practice (
 http://www.postgresql.org/message-id/cafj8pracfkoinp2uxeizod5krx29n2ofsoldh0w6ej7rxko...@mail.gmail.com
 ) . http://www.postgresql.org/docs/9.3/static/storage-toast.html  I'm not
 looking for streaming, retrieve/set binary buffers.  It'd be nice to
 translate it transparently to HDF5 python in-memory objects (ie h5py).

 Python translation is very easy in SQLA, its just if you have special
 needs for SQL syntaxes, that's where special behaviors may be needed.  So
 far it sounds like the only blocking factor is that bind_sql needs to
 distinguish between INSERT and UPDATE.  that's not a terribly tall order
 though it is inconvenient in that the API would need a
 backwards-compatibility layer.


Perhaps you can keep backwards compatibility by introspecting the callbacks
to determine the arity of their arguments.








 As the feature involves SQL functions I don't think you'd be dealing only
 with bind_processor(), the SQL functions themselves would probably be via
 SQL-level processing, see
 http://docs.sqlalchemy.org/en/rel_0_9/core/types.html#applying-sql-level-bind-result-processing.
 It's true these functions aren't given access to the compiler context where
 you'd be able to discern INSERT from UPDATE, so I'd probably use two
 variants of datatype, an InsertLOB and UpdateLOB datatype, then I'd apply
 these to an INSERT or UPDATE statement at runtime probably using a
 @compiles decorator -
 http://docs.sqlalchemy.org/en/rel_0_9/core/compiler.html, that is, take
 the existing LOB type and just swap it out for InsertLOB/UpdateLOB.
 Adding compiler context to TypeEngine is something we could look into as a
 feature as well so that bind_expression() has this available somehow and
 switching around types wouldn't be needed.


  Separate datatypes is clearly not a good approach to this.


 Let me clarify that these separate datatypes would be totally invisible to
 the user.   The user would work with a single LOB type.  Translation to
 Insert/Update versions would not be explicit and would occur at the point
 at which the insert/update construct is compiled.   At the moment this may
 be the best approach short of modifying the library (but then again I can't
 say much about the approach because i have little idea what the SQL we're
 talking about looks like).


Ah.  I thought it would be done in the declarative table descriptions.
Being relatively new to SQLA, I'm not sure what that would look like.




   Everything starts with a transaction block.

 the psycopg2 DBAPI is implicitly in a transaction block at all times
 unless autocommit mode is set up, so there shouldn't be any problem here.

 AFAIK it won't work with autocommit, but I don't have documentation to
prove that - I just remember getting errors pertaining to it in, I think, JDBC.



Hand crafted inserts use server side lo_create(-1) (in sql) which
 allocates a new large object and returning clause to get the oid in one go
 back to the client side.  Then I start using the lobject api on the
 returned oid.

 I'd like to see explicit SQL, preferably in the form of a psycopg2 script
 that illustrates all the operations you wish to support and specifically
 how they must interact with the database.


I've attached example usages.  As I indicated in my prior email, right now
I only do inserts/selects.



In patches I've made to cppdb, in the bind of std::istream I call
 client side lo_create, lo_open, a sequence of lo_writes, and finally
 lo_close.  In retrospect this doesn't handle updates particularly well and
 maybe this could be used in sqlalchemy as is too - an update just allocates
 a new large object and unreferences the old one.

 I wonder why the mechanics of these functions can't be more cleanly
 wrapped into server-side stored procedures?If they can be distilled
 into simple insert_lob() update_lob() functions that would potentially be a
 better separation of duties.


I think it's because only small lobs could rely on that - huge lobs that
exceed working memory fit better with a file-like api.  It would be a nice
convenience to have, perhaps.  These same small lobs (ie lobs that fit in
[sqlalchemy] Any progress on friend relationships, aka bi-directional self-referencing many-to-many

2014-05-29 Thread Jason Wirth
I've been learning SQLAlchemy and wanted to model Users who can 
add friends, with any additions populating across all users. I pulled my hair 
out yesterday because I could get it to work in one direction, but not 
bi-directionally. This seems like such a simple thing (it's easily done in 
the Django ORM). 

The use would be something like this:

jobs = Person(name='Steve Jobs') 
woz = Person(name='Steve Wozniak') 
ive = Person(name='Johnny Ive') 

jobs.friends = [woz, ive]

>>> jobs
Out[1]: Person (Steve Jobs) 

>>> jobs.friends
Out[2]: [Person (Steve Wozniak), Person (Johnny Ive)]

>>> woz.friends
Out[3]: [Person (Steve Jobs)]



I found Mike's answer to this problem: 

- http://www.mail-archive.com/sqlalchemy%40googlegroups.com/msg26262.html
- 
http://stackoverflow.com/questions/9116924/how-can-i-achieve-a-self-referencing-many-to-many-relationship-on-the-sqlalchemy


These posts are from 2012. Has there been any development on these kinds of 
relationships or are Mike's answers still the best known solutions?

Best,
Jason
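A runnable sketch of one common answer to this question: a plain association table holding one row per direction, kept symmetric by a small helper method. The names, helper, and SQLite engine are invented for illustration, and SQLAlchemy 1.4+ is assumed.

```python
from sqlalchemy import Column, ForeignKey, Integer, String, Table, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

# Association table: one row per direction of a friendship.
friendship = Table(
    "friendship", Base.metadata,
    Column("person_id", Integer, ForeignKey("person.id"), primary_key=True),
    Column("friend_id", Integer, ForeignKey("person.id"), primary_key=True),
)

class Person(Base):
    __tablename__ = "person"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    friends = relationship(
        "Person",
        secondary=friendship,
        primaryjoin=id == friendship.c.person_id,
        secondaryjoin=id == friendship.c.friend_id,
    )

    def befriend(self, other):
        # Writing both directions keeps the relationship symmetric.
        if other not in self.friends:
            self.friends.append(other)
        if self not in other.friends:
            other.friends.append(self)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
jobs = Person(name="Steve Jobs")
woz = Person(name="Steve Wozniak")
jobs.befriend(woz)
session.add_all([jobs, woz])
session.commit()

friends_of_woz = [f.name for f in woz.friends]
print(friends_of_woz)  # ['Steve Jobs']
```

The two-row approach trades a little storage for simple, symmetric queries; the alternative in Mike's linked answers is a union-based relationship over a single row per pair.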



Re: [sqlalchemy] matches_any: an extension to the Query object, and a HSTORE attribute access property

2014-01-07 Thread jason kirtland
On Tue, Jan 7, 2014 at 11:14 AM, Philip Scott safetyfirstp...@gmail.com wrote:

 Hi folks,

 SQLAlchemy is at the heart of what we do at work (UK based hedge fund); so
 first of all a big thank you; I am not quite sure where we'd be without
 you. We would like to give back to the community as much as we can (I tried
 to get some of the developers on the company xmas present list this year
 but was too late.. cross your fingers for next year).

 We have extended SQLAlchemy in a few places; though it is quite
 intermingled with our domain-specific stuff, I keep an eye out for little
 snippets that might be useful to others. So here's a trivial one; take it
 or leave it (and feel free to think of a better name). Knowing my luck it
 already exists, though I have looked hard through the docs!

 class QueryEnhanced(Query):
     ''' Add a few extra bells and whistles to the standard Query object '''
     def matches_any(self):
         ''' Returns true if your query would return 1 or more rows; false
         otherwise.

         The following two statements ask the same question; but
         matches_any is _much_ quicker on large tables:
             my_query.matches_any()
             my_query.count() != 0
         '''
         return self.session.scalar(select([self.exists()]))
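On modern SQLAlchemy (1.4+), where select() accepts positional expressions, the same EXISTS-versus-COUNT idea can be demonstrated like this; the table, data, and in-memory engine are invented for illustration:

```python
from sqlalchemy import Column, Integer, create_engine, select
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Animal(Base):
    __tablename__ = "animal"
    id = Column(Integer, primary_key=True)
    legs = Column(Integer)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add_all([Animal(legs=4), Animal(legs=2)])
session.commit()

query = session.query(Animal).filter(Animal.legs > 2)
# EXISTS can stop at the first matching row; COUNT must visit them all.
has_rows = session.scalar(select(query.exists()))
print(bool(has_rows))  # True
```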

 The other bit of technology we have that could be unpicked without _too_
 much trouble is a sort of reverse CompositeProperty; many attributes of
 different types, including collections, out of one HSTORE column (with a
 sort of side-loaded instrumentation for mutation tracking that I think
 could have been done in a more idiosyncratic way).

 Paraphrasing a bit but you can do things like:

 class Animal(Base):
     data       = Column(MutableDict.as_mutable(HSTORE))

     colour     = HsProperty(data, String)
     legs       = HsProperty(data, Integer)
     discovered = HsProperty(data, Date)
     fun_facts  = HsProperty(data, JSONEncoded(list))

 'colour', 'legs', 'discovered', and 'fun_facts' end up as keys in the
 HSTORE and the values are strings, integers, dates and lists on the python
 side, but stored as strings in the HSTORE in such a way that they can be
 CAST-ed in a server query [where possible]:

     session().query(Animal).filter(Animal.legs > 2)

 and get a query like

     SELECT ... FROM animal WHERE CAST(animal.data -> 'legs' AS INTEGER) > 2

 You can also put an arbitrary JSONEncodable object in there too.
 Collections get converted to Mutable counterparts for change-tracking.

 In many ways it is similar to ColumnProperty, except that the properties
 are writable (and when written only trigger the relevant bits of the hstore
 to be updated). Also, on object instances the values in HsProperties are
 fetched as part of the query; we lazily de-serialise them directly from the
 hstore dictionary.

 Before spending a couple of days removing our corporate clutter from that,
 getting permission to license it etc. and posting either as a patch or
 extension, I thought I would see if there is any interest (or if someone has
 already done it better?). It's implemented as a custom metaclass right now,
 but I think I might be able to do it fully with events.


I would be very interested in this work. At my org we have a subset of the
same idea that we're depending upon, but it's tied to an ancient SQLAlchemy
version and we never took it all the way into the query space like that.
That looks absolutely fabulous!



Re: [sqlalchemy] testing for an association proxy (possible bug and patch included)

2013-08-23 Thread jason kirtland
On Fri, Aug 23, 2013 at 2:31 PM, Gombas, Gabor (IT) 
gabor.gom...@morganstanley.com wrote:

 On Fri, Aug 23, 2013 at 12:11:39PM -0700, Jonathan Vanasco wrote:

  i think a simple fix could be something like this ( line 240,
 sqlalchemy/ext/associationproxy.py
  )
 
   if self.scalar:
  -    if not getattr(obj, self.target_collection)
  -    return self._scalar_get(getattr(obj, self.target_collection))
   else:

   if self.scalar:
  +    proxied = getattr(obj, self.target_collection)
  +    if not proxied:
  +        return None
  +    return self._scalar_get(proxied)
   else:

 We're monkey-patching AssociationProxy.__get__ with the same change
 since SQLA 0.5.x, so it would be nice to get it applied upstream...
 Maybe in 0.9?


The patch seems like surprising Python behavior to me. Traversing across a
None is almost certainly a bug in regular code, and quashing that error by
default feels dangerous. I would want this to raise by default (and I have
found bugs because it did.)  I think you could opt into this behavior by
supplying an alternate, custom getter function that quashed None when
creating the proxy.
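A sketch of that opt-in route: supply a getset_factory whose getter quashes None, rather than patching AssociationProxy.__get__. The model names and the none_safe helper are invented, and SQLAlchemy 1.4+ is assumed:

```python
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

def none_safe(collection_class, proxy):
    # Custom getter/setter pair: return None for a missing target
    # instead of raising AttributeError.
    def getter(target):
        return getattr(target, proxy.value_attr) if target is not None else None

    def setter(target, value):
        setattr(target, proxy.value_attr, value)

    return getter, setter

class Child(Base):
    __tablename__ = "child"
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)
    child_id = Column(Integer, ForeignKey("child.id"))
    child = relationship(Child)
    child_name = association_proxy("child", "name", getset_factory=none_safe)

p = Parent()
print(p.child_name)  # None rather than an AttributeError
p.child = Child(name="spot")
print(p.child_name)  # spot
```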



Re: [sqlalchemy] Cross-schema foreign keys reflection

2013-08-14 Thread Jason


On Tuesday, August 13, 2013 7:43:54 PM UTC-4, Michael Bayer wrote:

 cross-schema reflection is supported on PG but has caveats; see 
 http://docs.sqlalchemy.org/en/rel_0_8/dialects/postgresql.html#remote-cross-schema-table-introspection
 for a discussion of recommended usage patterns.




Thanks for pointing out that page. I also needed to add the schema to the 
foreign key declarations, e.g. Column(Integer, ForeignKey('otherschema.othertable.id')) 
instead of Column(Integer, ForeignKey('othertable.id')).

-- Jason
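A minimal self-contained sketch of the pattern described above (table and column names invented); note the ForeignKey target string carries the same schema as the referenced Table's schema= argument, and the metadata resolves the link without any database connection:

```python
from sqlalchemy import Column, ForeignKey, Integer, MetaData, Table

metadata = MetaData()

othertable = Table(
    "othertable", metadata,
    Column("id", Integer, primary_key=True),
    schema="otherschema",
)

# The ForeignKey target must be schema-qualified to match the
# referenced table's registration key in the MetaData.
mytable = Table(
    "mytable", metadata,
    Column("id", Integer, primary_key=True),
    Column("other_id", Integer, ForeignKey("otherschema.othertable.id")),
)

fk = list(mytable.c.other_id.foreign_keys)[0]
print(fk.column.table.schema, fk.column.table.name)  # otherschema othertable
```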





[sqlalchemy] Latest sphinx with autodoc may fail with SQLAlchemy models

2013-04-11 Thread Jason
Just a heads up that it looks like Sphinx 1.2b1 (the latest release on 
PyPI) may not work when using the autodoc plugin on SQLAlchemy models. I've 
created a ticket 
at https://bitbucket.org/birkenfeld/sphinx/issue/1143/breaks-on-some-attributes 
for Sphinx, but it also seems weird that RelationshipProperty can't 
handle having repr() called on it.

--
Jason





[sqlalchemy] Re: Advice: Best practice for working with an existing database

2013-03-22 Thread Jason


On Monday, March 18, 2013 5:25:36 PM UTC-4, Peter Herndon wrote:

 Hi all,

 I'm new to SQLAlchemy, and looking for some advice on how to approach 
 working with an existing database. The database will not be managed by 
 Python, and I will need to maintain whatever library I write to keep up 
 with the occasional schema change. I am looking to write a more-or-less 
 definitive database access layer for myself and others to use if and when 
 we write applications in Python -- the company is not primarily a Python 
 shop. 

 What's my best approach for building this library? Should I use 
 Declarative, or should I use classical mapping -- which would be better, 
 easier to maintain, easier for others to use? Should I be writing out the 
 classes, or should I rely on reflection? Speaking of reflection, I've been 
 using it to analyze what SQLAlchemy thinks of the schema. For a given table 
 created by reflection, I'm seeing _autoincrement_column, columns, 
 foreign_keys, indexes, and primary_key -- are there other table attributes 
 I should be inspecting?

 Thanks for your advice!

 ---Peter


Hi Peter,
I think using the declarative reflected style is a great way to integrate 
an existing database into your application. It sounds like you are doing 
this already, but in case you are not, the reflection setup is documented at 
http://docs.sqlalchemy.org/en/rel_0_8/orm/extensions/declarative.html#using-reflection-with-declarative.
It's very easy to have it populate all of your models with the correct 
table attributes using the DeferredReflection base class. Then you just 
have to make (almost empty) class definitions for each model. In theory you 
could even make it so that the table name is inferred from the name of the 
Model class, but in practice I found this to be annoying (I would 
constantly forget how the table names compared to the class names).

--
Jason
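A runnable sketch of the DeferredReflection setup described above, with an in-memory SQLite table standing in for the pre-existing database; names are invented and SQLAlchemy 1.4+ is assumed:

```python
from sqlalchemy import create_engine, text
from sqlalchemy.ext.declarative import DeferredReflection
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Reflected(DeferredReflection, Base):
    __abstract__ = True

# Almost-empty class: columns are filled in from the database at prepare().
class User(Reflected):
    __tablename__ = "users"

engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"))

Reflected.prepare(engine)
cols = [c.name for c in User.__table__.columns]
print(cols)  # ['id', 'name']
```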





[sqlalchemy] 0.8.0b Postgres Array Issue

2013-03-07 Thread Jason
Hello,

I'm using a column defined as:

discounts = Column(ARRAY(Discount))

Where Discount is a UserDefinedType that just passes the value through 
to/from Psycopg2 (which uses a namedtuple for the discounts value):

class Discount(UserDefinedType):
    """SQLAlchemy type that passes through values to be handled by a psycopg2
    extension type.
    """

    type_name = 'Discount'

    def get_col_spec(self):
        return self.type_name

    def bind_processor(self, dialect):
        return None

    def result_processor(self, dialect, coltype):
        return None

    def is_mutable(self):
        return True

    def copy_value(self, value):
        return copy.copy(value)


Previously (in 0.7.9) the discounts value on a model instance would be a 
list of discount types (which are namedtuples). Now SQLAlchemy seems to 
convert the Discount type into a list, resulting in a two-dimensional list 
for the discounts value instead of a list of namedtuples. The documentation 
doesn't mention this side-effect of the array improvements; is this a bug 
in 0.8.0b? 

Thanks,

Jason





Re: [sqlalchemy] 0.8.0b Postgres Array Issue

2013-03-07 Thread Jason


On Thursday, March 7, 2013 1:40:46 PM UTC-5, Michael Bayer wrote:

 Can you pass along more specifics here?  I don't see where this named 
 tuple is being created.  It's true that the ARRAY type by default doesn't 
 know how deep it should be unwrapping arrays, if you pass the dimensions 
 argument then it will be fixed.   But it's not clear why you didn't see 
 this issue in 0.7 as well, perhaps we check for a wider range of 
 array/tuple types at this point but I don't recall.



 A namedtuple is returned by Psycopg2 directly (using its 
register_composite mechanism). The result processing changed in this 
commit: 
https://bitbucket.org/sqlalchemy/sqlalchemy/commits/0ba8b4fd402f1c187b80310fc838dcdea27e0af8,
 
so I will take a look to see if I can figure out what causes the new 
behaviour. I'm guessing that it sees the namedtuple instance and figures 
that it is a tuple instance so it can be converted into a list.


--
Jason
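For reference, passing the dimensions argument Mike mentions looks like the sketch below, with an Integer element type standing in for the custom Discount type; dimensions=1 fixes how many levels of nesting result processing will unwrap, which is the knob relevant to the namedtuple issue:

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects.postgresql import ARRAY

metadata = MetaData()

# dimensions=1 declares a one-dimensional array, so result handling
# unwraps exactly one level of sequence nesting.
orders = Table(
    "orders", metadata,
    Column("id", Integer, primary_key=True),
    Column("discounts", ARRAY(Integer, dimensions=1)),
)
print(orders.c.discounts.type.dimensions)  # 1
```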





Re: [sqlalchemy] Postgres custom composite types

2013-03-05 Thread Jason


On Tuesday, March 5, 2013 11:09:00 AM UTC-5, Michael Bayer wrote:


  On Mar 5, 2013, at 9:45 AM, Jason ja...@deadtreepages.com 
 wrote: 

  Hello, 
  
  I'm looking for some direction on how to implement the SQL syntax when 
 querying Postgres' composite types. In case you're unfamiliar: the 
 composite types in Postgres are a type that contains attributes which are 
 regular SQL types. The attribute to type relationship is similar to the 
 column to table relationship (except there are no constraints on 
 attributes). A table column can be this composite type. Psycopg2 implements 
 this by instantiating a namedtuple on retrieval or accepting a namedtuple 
 on update/insert (you can also use your own custom class). The saving and 
 retrieval isn't a problem with SQLAlchemy, I just pass through the 
 namedtuple to/from Psycopg2. 
  
  What I need to implement is the SQL syntax for querying composite type 
 attributes. Postgres allows this by using the syntax 
 (table_name.column_name).attribute_name = 'bleh' or just 
 (column_name).attribute_name = 'bleh' when a table identifier is not 
 required. I'm not sure how to go about this because the sql generation 
 needs to change the way the column name is output, would this require 
 subclassing the ColumnClause? I think I just need to know where to override 
 the behaviour of generating the qualified column name in statements. 

 you'd be doing ColumnElement here which is the more fundamental object.   
  It's discussed here in subclassing guidelines: 
 http://docs.sqlalchemy.org/en/rel_0_8/core/compiler.html#subclassing-guidelines
  

 if you're on 0.8 the integration here can be very smooth, using custom 
 operators (
 http://docs.sqlalchemy.org/en/rel_0_8/core/types.html#types-operators) 
 you could have an approach that looks like: 
  mytable.c.column.attrs.attribute_name == value.   The attrs namespace 
 would call into your custom ColumnElement. 

 Since the custom operator API is a total blast to use here's a proof of 
 concept: 




Wow thats great, thanks!

There is a problem with this when using declarative and a reflected base. 
My Column's type gets overwritten with NullType even though I've set 
autoload_replace=False when constructing the table. I am still using the 
example ReflectedBase that was in the 0.7 documentation (because the new 
DeferredReflection class in 0.8 raises a NoReferencedTableError when 
configure_mappers() is called). Any idea how this reflected base is 
overwriting the type?:

class DeclarativeReflectedBase(object):

    _mapper_args = []

    @classmethod
    def __mapper_cls__(cls, *args, **kw):
        """Declarative will use this function in lieu of
        calling mapper() directly.

        Collect each series of arguments and invoke
        them when prepare() is called.
        """
        cls._mapper_args.append((args, kw))

    @classmethod
    def prepare(cls, engine):
        """Reflect all the tables and map!"""
        for args, kw in cls._mapper_args:
            class_ = args[0]
            print class_.__tablename__
            class_.__table__ = table = Table(
                class_.__tablename__,
                cls.metadata,
                extend_existing=True,
                autoload_replace=False,
                autoload=True,
                autoload_with=engine)
            class_.__mapper__ = mapper(class_, table, **kw)


Thanks,

Jason
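For reference, the composite-attribute syntax under discussion in this thread can be rendered with a trivial string helper (a hypothetical sketch of the SQL text only, not of SQLAlchemy's compiler integration or the proof of concept mentioned above):

```python
def composite_access(column, attribute, table=None):
    """Render the Postgres composite-attribute access syntax:
    (table.column).attribute, or (column).attribute when no
    table qualifier is needed."""
    qualified = "%s.%s" % (table, column) if table else column
    return "(%s).%s" % (qualified, attribute)

print(composite_access("address", "city", table="users"))  # (users.address).city
print(composite_access("address", "city"))                 # (address).city
```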





Re: [sqlalchemy] Postgres custom composite types

2013-03-05 Thread Jason


On Tuesday, March 5, 2013 2:52:59 PM UTC-5, Jason wrote:



 On Tuesday, March 5, 2013 11:09:00 AM UTC-5, Michael Bayer wrote:


 On Mar 5, 2013, at 9:45 AM, Jason ja...@deadtreepages.com wrote: 

  Hello, 
  
  I'm looking for some direction on how to implement the SQL syntax when 
 querying Postgres' composite types. In case you're unfamiliar: the 
 composite types in Postgres are a type that contains attributes which are 
 regular SQL types. The attribute to type relationship is similar to the 
 column to table relationship (except there are no constraints on 
 attributes). A table column can be this composite type. Psycopg2 implements 
 this by instantiating a namedtuple on retrieval or accepting a namedtuple 
 on update/insert (you can also use your own custom class). The saving and 
 retrieval isn't a problem with SQLAlchemy, I just pass through the 
 namedtuple to/from Psycopg2. 
  
  What I need to implement is the SQL syntax for querying composite type 
 attributes. Postgres allows this by using the syntax 
 (table_name.column_name).attribute_name = 'bleh' or just 
 (column_name).attribute_name = 'bleh' when a table identifier is not 
 required. I'm not sure how to go about this because the sql generation 
 needs to change the way the column name is output, would this require 
 subclassing the ColumnClause? I think I just need to know where to override 
 the behaviour of generating the qualified column name in statements. 

 you'd be doing ColumnElement here which is the more fundamental object.   
  It's discussed here in subclassing guidelines: 
 http://docs.sqlalchemy.org/en/rel_0_8/core/compiler.html#subclassing-guidelines
  

 if you're on 0.8 the integration here can be very smooth, using custom 
 operators (
 http://docs.sqlalchemy.org/en/rel_0_8/core/types.html#types-operators) 
 you could have an approach that looks like: 
  mytable.c.column.attrs.attribute_name == value.   The attrs namespace 
 would call into your custom ColumnElement. 

 Since the custom operator API is a total blast to use here's a proof of 
 concept: 




 Wow thats great, thanks!

 There is a problem with this when using declarative and a reflected base. 
 My Column's type gets overwritten with NullType even though I've set 
 autoload_replace=False when constructing the table. I am still using the 
 example ReflectedBase that was in the 0.7 documentation (because the new 
 DeferredReflection class in 0.8 raises a NoReferencedTableError when 
 configure_mappers() is called). Any idea how this reflected base is 
 overwriting the type?:


I figured out the problem. It's not finding the table in the metadata 
(because it's schema qualified in metadata.tables) and so it creates it 
anew. 





[sqlalchemy] Re: max() min() string lengths?

2013-01-04 Thread Jason


On Thursday, January 3, 2013 10:08:19 PM UTC-5, ocicat wrote:

 Embarrassingly, I've gotten lost calling SQL functions in SQLAlchemy 
 0.7.1.

 I can boil the problem down to the following table structure:

 CREATE TABLE words (
 id INTEGER NOT NULL, 
 timestamp DATETIME NOT NULL, 
 word TEXT NOT NULL, 
 PRIMARY KEY (id), 
 UNIQUE (word)
 );

 ...where I would like to find the maximum and minimum stored string lengths, 
 e.g.

 SELECT MAX(LENGTH(word)), MIN(LENGTH(word)) FROM words;

 The code below constructs and populates the table correctly, but translating 
 the above SQL into something more Pythonic is eluding me.  Any suggestions 
 would be welcomed, as I'm in a rut.

 Thanks.

 #8

 #!/usr/bin/env python

 from datetime import datetime

 from sqlalchemy import create_engine, Column, func
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.orm import sessionmaker
 from sqlalchemy.dialects.sqlite import INTEGER, DATETIME, TEXT

 Base = declarative_base()

 def get_dbname():
 return 'test.db'

 class Word(Base):
 __tablename__ = 'words'

 id = Column(INTEGER, primary_key=True)
 timestamp = Column(DATETIME, nullable=False, default=datetime.now)
 word = Column(TEXT, nullable=False, unique=True)

 def __init__(self, word):
 self.word = word

 def __repr__(self):
 return 'Word(%d, %s, %s)' % (self.id, self.timestamp, 
 self.word)

 if __name__ == '__main__':
 engine = create_engine('sqlite:///' + get_dbname(), echo=True)
 Base.metadata.create_all(engine) 
 Session = sessionmaker(bind=engine)
 session = Session()

 words = ("THE OF AND TO A IN THAT IS WAS HE FOR IT WITH AS HIS ON BE AT "
          "SAME ANOTHER KNOW WHILE LAST").split()

 for w in words: 
 session.add(Word(w))
 session.commit()

 print 'total words = %d' % session.query(Word).count()

 # minimum length = ?
 # maximum length = ?


Would something like this work:

from sqlalchemy import func
session.query(func.max(func.length(Word.word)), 
func.min(func.length(Word.word))) 


--
Jason
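The SQL this emits can be sanity-checked with the stdlib sqlite3 module against the same schema (a sketch reusing part of the thread's word list):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE words (id INTEGER PRIMARY KEY, word TEXT NOT NULL UNIQUE)")
word_list = "THE OF AND TO A IN THAT IS WAS HE FOR IT WITH AS HIS ON BE AT".split()
conn.executemany("INSERT INTO words (word) VALUES (?)",
                 [(w,) for w in word_list])

# SQL equivalent of func.max(func.length(...)) / func.min(func.length(...))
row = conn.execute(
    "SELECT MAX(LENGTH(word)), MIN(LENGTH(word)) FROM words").fetchone()
print(row)  # (4, 1): longest is 'THAT'/'WITH', shortest is 'A'
```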
 




[sqlalchemy] flush overflow counter debug

2012-12-04 Thread Jason
After upgrading to SQLAlchemy 0.7.9 I now receive the error "FlushError: 
Over 100 subsequent flushes have occurred within session.commit() - is an 
after_flush() hook creating new objects?", which was introduced by 
http://docs.sqlalchemy.org/en/latest/changelog/changelog_07.html#change-75a53327aac5791fe98ec087706a2821
in the changelog. 

I don't have any after_flush event handlers.  I do have a before_flush 
event handler that changes the state of a related object, but that doesn't 
sound like what the error is talking about.

How can I debug this further? I am doing this within a Pyramid application, 
so I am somewhat removed from the commit logic.

Thanks,

Jason




Re: [sqlalchemy] flush overflow counter debug

2012-12-04 Thread Jason
Does this mean there is a limit to the number of queries I can run in a 
transaction? 

For example I am looping about 20 times. For each loop I insert one or two 
rows and do at least one query. There might be some more implicit queries 
when accessing relationship properties. If I set Session.autoflush to false 
before the loop and then set it back to true afterwards it works.

On Tuesday, December 4, 2012 3:16:24 PM UTC-5, Michael Bayer wrote:


 On Dec 4, 2012, at 3:04 PM, Jason wrote:

 After upgrading to SQLAlchemy 0.7.9 I now receive the error "FlushError: 
 Over 100 subsequent flushes have occurred within session.commit() - is an 
 after_flush() hook creating new objects?", which was introduced by 
 http://docs.sqlalchemy.org/en/latest/changelog/changelog_07.html#change-75a53327aac5791fe98ec087706a2821
 in the changelog. 

 I don't have any after_flush event handlers.  I do have a before_flush 
 event handler that changes the state of a related object, but that doesn't 
 sound like what the error is talking about.

 How can I debug this further? I am doing this within a Pyramid 
 application, so I am somewhat removed from the commit logic.


 this error traps the condition that dirty state remains in the Session 
 after a flush has completed.This is possible if an after_flush hook has 
 added new state, or perhaps also if a mapper.after_update/after_insert etc 
 hook, or even a before_update/before_insert has modified the flush plan, 
 which is not appropriate in any case.

 the best way is to actually create an after_flush() hook with a 
 pdb.set_trace() in it; in there, you'd just look at session.new, 
 session.dirty, and session.deleted to ensure that they are empty.







Re: [sqlalchemy] flush overflow counter debug

2012-12-04 Thread Jason
Disregard that, I spoke too soon. There is something going on after it 
starts the commit process.

On Tuesday, December 4, 2012 3:35:40 PM UTC-5, Jason wrote:

 Does this mean there is a limit to the number of queries I can run in a 
 transaction? 

 For example I am looping about 20 times. For each loop I insert one or two 
 rows and do at least one query. There might be some more implicit queries 
 when accessing relationship properties. If I set Session.autoflush to false 
 before the loop and then set it back to true afterwards it works.

 On Tuesday, December 4, 2012 3:16:24 PM UTC-5, Michael Bayer wrote:


 On Dec 4, 2012, at 3:04 PM, Jason wrote:

 After upgrading to SQLAlchemy 0.7.9 I now receive the error "FlushError: 
 Over 100 subsequent flushes have occurred within session.commit() - is an 
 after_flush() hook creating new objects?", which was introduced by 
 http://docs.sqlalchemy.org/en/latest/changelog/changelog_07.html#change-75a53327aac5791fe98ec087706a2821
 in the changelog. 

 I don't have any after_flush event handlers.  I do have a before_flush 
 event handler that changes the state of a related object, but that doesn't 
 sound like what the error is talking about.

 How can I debug this further? I am doing this within a Pyramid 
 application, so I am somewhat removed from the commit logic.


 this error traps the condition that dirty state remains in the Session 
 after a flush has completed.This is possible if an after_flush hook has 
 added new state, or perhaps also if a mapper.after_update/after_insert etc 
 hook, or even a before_update/before_insert has modified the flush plan, 
 which is not appropriate in any case.

 the best way is to actually create an after_flush() hook with a 
 pdb.set_trace() in it; in there, you'd just look at session.new, 
 session.dirty, and session.deleted to ensure that they are empty.







Re: [sqlalchemy] flush overflow counter debug

2012-12-04 Thread Jason
Ok I figured out the cause, but not the solution. I am using a mutable type 
for hstore columns. I have a UserDefinedType for Hstore that just passes 
everything through to psycopg2's hstore type:

class HStore(UserDefinedType):
    """SQLAlchemy type that passes through values to be handled by a
    psycopg2 extension type."""

    type_name = 'HSTORE'

    def get_col_spec(self):
        return self.type_name

    def bind_processor(self, dialect):
        return None

    def result_processor(self, dialect, coltype):
        return None

    def is_mutable(self):
        return True

    def copy_value(self, value):
        return copy.copy(value)


I used the MutationDict 
from http://docs.sqlalchemy.org/en/rel_0_7/orm/extensions/mutable.html :

class MutationDict(Mutable, dict):

    @classmethod
    def coerce(cls, key, value):
        """Convert plain dictionaries to MutationDict."""
        if not isinstance(value, MutationDict):
            if isinstance(value, dict):
                return MutationDict(value)

            # this call will raise ValueError
            return Mutable.coerce(key, value)
        else:
            return value

    def __setitem__(self, key, value):
        """Detect dictionary set events and emit change events."""
        dict.__setitem__(self, key, value)
        self.changed()

    def __delitem__(self, key):
        """Detect dictionary del events and emit change events."""
        dict.__delitem__(self, key)
        self.changed()


The column definition I use is:

some_attrs = Column(MutationDict.as_mutable(HStore))

Then in the actual transaction I am copying one object with that column 
definition to another object with the same definition:

newobject.some_attrs = other_object.some_attrs


If I comment out that line there is only a single flush at the commit time. 
 

It looks correct according to the examples I have seen, but if you know why 
it keeps setting them as dirty please let me know.

On Tuesday, December 4, 2012 3:16:24 PM UTC-5, Michael Bayer wrote:


 On Dec 4, 2012, at 3:04 PM, Jason wrote:

 After upgrading to SQLAlchemy 0.7.9 I now receive the error "FlushError: 
 Over 100 subsequent flushes have occurred within session.commit() - is an 
 after_flush() hook creating new objects?", which was introduced by 
 http://docs.sqlalchemy.org/en/latest/changelog/changelog_07.html#change-75a53327aac5791fe98ec087706a2821
 in the changelog. 

 I don't have any after_flush event handlers.  I do have a before_flush 
 event handler that changes the state of a related object, but that doesn't 
 sound like what the error is talking about.

 How can I debug this further? I am doing this within a Pyramid 
 application, so I am somewhat removed from the commit logic.


 this error traps the condition that dirty state remains in the Session 
 after a flush has completed.This is possible if an after_flush hook has 
 added new state, or perhaps also if a mapper.after_update/after_insert etc 
 hook, or even a before_update/before_insert has modified the flush plan, 
 which is not appropriate in any case.

 the best way is to actually create an after_flush() hook with a 
 pdb.set_trace() in it; in there, you'd just look at session.new, 
 session.dirty, and session.deleted to ensure that they are empty.







Re: [sqlalchemy] flush overflow counter debug

2012-12-04 Thread Jason


On Tuesday, December 4, 2012 4:15:03 PM UTC-5, Jason wrote:

 Ok I figured out the cause, but not the solution. I am using a mutable 
 type for hstore columns. I have a UserDefinedType for Hstore that just 
 passes everything through to psycopg2's hstore type:

 class HStore(UserDefinedType):
     """SQLAlchemy type that passes through values to be handled by a
     psycopg2 extension type."""

     type_name = 'HSTORE'

     def get_col_spec(self):
         return self.type_name

     def bind_processor(self, dialect):
         return None

     def result_processor(self, dialect, coltype):
         return None

     def is_mutable(self):
         return True

     def copy_value(self, value):
         return copy.copy(value)


 I used the MutationDict from 
 http://docs.sqlalchemy.org/en/rel_0_7/orm/extensions/mutable.html :

 class MutationDict(Mutable, dict):

     @classmethod
     def coerce(cls, key, value):
         """Convert plain dictionaries to MutationDict."""
         if not isinstance(value, MutationDict):
             if isinstance(value, dict):
                 return MutationDict(value)

             # this call will raise ValueError
             return Mutable.coerce(key, value)
         else:
             return value

     def __setitem__(self, key, value):
         """Detect dictionary set events and emit change events."""
         dict.__setitem__(self, key, value)
         self.changed()

     def __delitem__(self, key):
         """Detect dictionary del events and emit change events."""
         dict.__delitem__(self, key)
         self.changed()


 The column definition I use is:

 some_attrs = Column(MutationDict.as_mutable(HStore))

 Then in the actual transaction I am copying one object with that column 
 definition to another object with the same definition:

 newobject.some_attrs = other_object.some_attrs


 If I comment out that line there is only a single flush at the commit 
 time.  

 It looks correct according to the examples I have seen, but if you know 
 why it keeps setting them as dirty please let me know.

 On Tuesday, December 4, 2012 3:16:24 PM UTC-5, Michael Bayer wrote:


 On Dec 4, 2012, at 3:04 PM, Jason wrote:

 After upgrading to SQLAlchemy 0.7.9 I now receive the error "FlushError: 
 Over 100 subsequent flushes have occurred within session.commit() - is an 
 after_flush() hook creating new objects?", which was introduced by 
 http://docs.sqlalchemy.org/en/latest/changelog/changelog_07.html#change-75a53327aac5791fe98ec087706a2821
 in the changelog. 

 I don't have any after_flush event handlers.  I do have a before_flush 
 event handler that changes the state of a related object, but that doesn't 
 sound like what the error is talking about.

 How can I debug this further? I am doing this within a Pyramid 
 application, so I am somewhat removed from the commit logic.


 this error traps the condition that dirty state remains in the Session 
 after a flush has completed.This is possible if an after_flush hook has 
 added new state, or perhaps also if a mapper.after_update/after_insert etc 
 hook, or even a before_update/before_insert has modified the flush plan, 
 which is not appropriate in any case.

 the best way is to actually create an after_flush() hook with a 
 pdb.set_trace() in it; in there, you'd just look at session.new, 
 session.dirty, and session.deleted to ensure that they are empty.


Looks like copy.copy will call __setitem__ on the new dict it creates while 
copying, resulting in the copy being marked dirty. I had to change the copy 
code to:

def copy_value(self, value):
    try:
        # Use dict.copy(), because copy.copy will call __setitem__ on the
        # value causing it to be marked dirty, which could result in
        # an infinite loop of flushing dirty copies if an hstore is
        # copied to another hstore column.
        return value.copy()
    except Exception:
        return None
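The mechanism is reproducible without SQLAlchemy: copy.copy rebuilds a dict subclass by assigning its items one at a time, so the new copy's __setitem__ (and hence a changed() hook) fires for every key, while dict.copy() returns a plain dict and bypasses it. A stdlib-only sketch, with TrackingDict as a hypothetical stand-in for MutationDict:

```python
import copy

class TrackingDict(dict):
    """Counts __setitem__ calls; a stand-in for MutationDict's
    changed()-emitting hook."""
    set_calls = 0

    def __setitem__(self, key, value):
        self.set_calls += 1          # MutationDict calls self.changed() here
        dict.__setitem__(self, key, value)

d = TrackingDict()
d['a'] = 1
d['b'] = 2
d.set_calls = 0                      # reset the counter before copying

c = copy.copy(d)                     # rebuilds the copy item by item
print(c.set_calls)                   # 2 - one "change event" per copied key
print(type(d.copy()))                # plain dict; __setitem__ never fires
```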




[sqlalchemy] determine if relationship is a backref

2012-08-15 Thread Jason
Is there a way to determine if a relationship property on a model class is 
a backref?

For instance I have the model:

class Department(Base):
    department_id = Column(Integer, primary_key=True)

class Employee(Base):
    employee_id = Column(Integer, primary_key=True)
    department_id = Column(Integer, ForeignKey(Department.department_id),
                           nullable=False)
    department = relationship(Department, backref='employees')

I'm looping through the attributes of each table and for each attribute 
that has a relationship property I want to determine whether the 
relationship constructor was called on the current table or not. So for 
Department.employees it would be False because the relationship was defined 
as a backref from another table. Is there a method/technique to accomplish 
this?


Thanks,

Jason




Re: [sqlalchemy] determine if relationship is a backref

2012-08-15 Thread Jason


On Wednesday, August 15, 2012 11:41:20 AM UTC-4, Michael Bayer wrote:




 in this case it seems like you aren't as much concerned about the actual 
 mapper configuration as you are about codepaths being invoked.   So we 
 instrument functions to track when they are called. 

 You'd make a wrapper around relationship() yourself, which would be used 
 by userland code.  This allows userland relationship() calls to be 
 distinguished from those that SQLAlchemy calls itself internally: 

 from sqlalchemy.orm import relationship as _relationship 
 relationships_called = set() 
 def relationship(*arg, **kw): 
 rel = _relationship(*arg, **kw) 
 relationships_called.add(rel) 
 return rel 

 def was_relationship_called(rel): 
 return rel in relationships_called 


 That's so simple I didn't think of it. I may switch from what I ended up 
doing: I check whether the relationship column with the foreign key is on the 
same table as this relationship property. This works because by convention 
I put the relationship on the same table as the FK column, and I am working 
on a CRUD that automatically adds relationships to forms only if they can 
be displayed as a simple select list (a relationship that did not have an 
FK on its model class would need a more advanced interface).

# prop is a RelationshipProperty
for pair in prop.local_remote_pairs:
    for el in pair:
        for fk in el.foreign_keys:
            if fk.parent.table == MyModel.__table__:
                local = True

Thanks for the help,

Jason




[sqlalchemy] How do I do this SQL query in SQLAlchemy?

2012-07-02 Thread Jason Phillips
I have a MySQL database with the following structure and was hoping someone 
could help me convert the SQL query below to SQLAlchemy.

*Database structure:*

*bottom:*
id
name
middle_id

*middle:*
id
name
top_id

*top:*
id
name

*Here are my models:*

class Bottom(db.Model):
id= db.Column(db.Integer, primary_key=True)
name  = db.Column(db.String(64))
middle_id = db.Column(db.Integer, db.ForeignKey('middle.id'))
middle= db.relationship('Middle',
backref=db.backref('bottoms', lazy='dynamic'))

class Middle(db.Model):
id= db.Column(db.Integer, primary_key=True)
name  = db.Column(db.String(64))
top_id= db.Column(db.Integer, db.ForeignKey('top.id'))
top   = db.relationship('Top',
backref=db.backref('middles', lazy='dynamic'))

class Top(db.Model):
id= db.Column(db.Integer, primary_key=True)
name  = db.Column(db.String(64))

*Here's the SQL I want to convert to SQLAlchemy:*

SELECT
  b.*,
  m.*,
  t.*
FROM bottom AS b
  LEFT JOIN (SELECT id, name, top_id FROM middle) AS m ON m.id = b.middle_id
  LEFT JOIN (SELECT id, name FROM top) AS t ON t.id = m.top_id

Thank you in advance :).
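The raw SQL can be exercised directly with the stdlib sqlite3 module (a sketch with made-up rows; note the subqueries in the original are equivalent to joining middle and top directly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE top (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE middle (id INTEGER PRIMARY KEY, name TEXT, top_id INTEGER);
    CREATE TABLE bottom (id INTEGER PRIMARY KEY, name TEXT, middle_id INTEGER);
    INSERT INTO top VALUES (1, 't1');
    INSERT INTO middle VALUES (1, 'm1', 1);
    INSERT INTO bottom VALUES (1, 'b1', 1);
    INSERT INTO bottom VALUES (2, 'b2', NULL);
""")
rows = conn.execute("""
    SELECT b.name, m.name, t.name
    FROM bottom AS b
    LEFT JOIN middle AS m ON m.id = b.middle_id
    LEFT JOIN top AS t ON t.id = m.top_id
    ORDER BY b.id
""").fetchall()
print(rows)  # [('b1', 'm1', 't1'), ('b2', None, None)]
```

The LEFT JOINs keep 'b2' even though it has no middle row, which is the behaviour the subquery form above also produces.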




[sqlalchemy] Arrays with altered elements

2011-08-04 Thread Jason
I would like to use an array comparison in a query, but with each array 
element being the result of a function. I do this by making the array with:
terms = [func.dmetaphone(t) for t in terms.split()]

When I use this array in a comparison I get an error "can't adapt type 
'Function'" because it is passing [<sqlalchemy.sql.expression.Function at 
0x1057b8310; dmetaphone>] as the array.

My full query looks like:

terms = [func.dmetaphone(t) for t in terms.split()]
metaphones = Session.query(
    func.dmetaphone(func.unnest(
        func.string_to_array(Customer.name, ' '))).label('mphone'),
    Customer.id).subquery()
customers = Session.query(metaphones.columns.id) \
    .group_by(metaphones.columns.id) \
    .having(func.array_agg(metaphones.columns.mphone).op('@>')(terms))

From all the documentation I have read, this should work, but SQLAlchemy 
is not evaluating the func.dmetaphone call within the array. Is there a way 
to force it to do that?


Thanks,

Jason




Re: [sqlalchemy] Arrays with altered elements

2011-08-04 Thread Jason
I suppose then the simplest solution is to make a function in the database 
that will execute a function on each element of an array, and use:
.having(func.array_agg(metaphones.columns.mphone).op('@>')(func.metaphone_array(terms)))

This seems to work fine.

Thanks,

Jason




[sqlalchemy] Re: Full-fledged objects as mapped attributes?

2011-07-01 Thread Jason Denning
Awesome!  That is exactly what I needed - thanks very much!
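The value-class half of the recipe discussed below can be sketched in plain Python: a str subclass keeps the DBAPI-facing value an ordinary string while adding helper methods (hypothetical names; the TypeDecorator and attribute event from the recipe would handle the coercion in both directions):

```python
class PhoneNumber(str):
    """str subclass with helper methods; a TypeDecorator plus the
    attribute-event recipe would coerce plain strings to this class."""

    def pretty_print(self):
        # assumes a bare 10-digit North American number
        return "(%s) %s-%s" % (self[:3], self[3:6], self[6:])

p = PhoneNumber("9995551212")
print(p)                   # 9995551212 - still compares/stores as a string
print(p.pretty_print())    # (999) 555-1212
print(isinstance(p, str))  # True
```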


On Jun 30, 8:06 am, Michael Bayer mike...@zzzcomputing.com wrote:
 your question has arrived at the same time as almost the exact same question 
 from another user, Russ, so I've added an example of how to use 
 attribute events in conjunction with a TypeDecorator, so that the data is 
 coerced both at the database level as well as at the attribute setter level 
 - this is 
 at http://www.sqlalchemy.org/trac/wiki/UsageRecipes/ValidateAllOccurrenc

 On Jun 29, 2011, at 9:40 PM, Jason Denning wrote:

  Hi All,
  I am building a Pyramid app using SQLAlchemy for the model, and I
  would like to be able to use the attributes as full-fledged objects
  (i.e., I would like to be able to define methods attached to the
  attributes), ideal example usage (although this is a somewhat
  contrived example):

  class Phone(Base):
      id = Column(Integer, primary_key=True)
      phone = Column(MyPhoneType)

  >>> p1 = Phone(phone=9995551212)
  >>> session.add(p1)
  >>> p1.phone
  '9995551212'
  >>> p1.phone.prettyPrint()
  (999) 555-1212
  >>> p1.phone.foo()
  Now doing foo...

  I have made some attempts towards this end using TypeDecorator and by
  trying to extend UserDefinedType, but it seems that the instantiated
  objects always have their mapped attributes converted into regular
  python types (string, unicode, etc..) losing whatever methods/class
  level stuff I try to define.

  I am not trying to affect any aspect of the descriptor protocol,
  object management, database interaction or any other ORM type stuff -
  I just want some helper methods, and maybe some instance-level
  attributes to be available for certain types of mapped attributes.

  Is this possible?  Should I be doing this using TypeDecorator /
  UserDefinedType / other ?  Is anyone else attempting this type of
  thing, or am I totally crazy?

  Thanks,
  Jason


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Full-fledged objects as mapped attributes?

2011-06-29 Thread Jason Denning
Hi All,
I am building a Pyramid app using SQLAlchemy for the model, and I
would like to be able to use the attributes as full-fledged objects
(i.e., I would like to be able to define methods attached to the
attributes), ideal example usage (although this is a somewhat
contrived example):

class Phone(Base):
id = Column(Integer, primary_key=True)
phone = Column(MyPhoneType)

p1 = Phone(phone=9995551212)
session.add(p1)
p1.phone
'9995551212'
p1.phone.prettyPrint()
(999) 555-1212
p1.phone.foo()
Now doing foo...

I have made some attempts towards this end using TypeDecorator and by
trying to extend UserDefinedType, but it seems that the instantiated
objects always have their mapped attributes converted into regular
python types (string, unicode, etc..) losing whatever methods/class
level stuff I try to define.

I am not trying to affect any aspect of the descriptor protocol,
object management, database interaction or any other ORM type stuff -
I just want some helper methods, and maybe some instance-level
attributes to be available for certain types of mapped attributes.

Is this possible?  Should I be doing this using TypeDecorator /
UserDefinedType / other ?  Is anyone else attempting this type of
thing, or am I totally crazy?

Thanks,
Jason
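The approach suggested in the replies (a TypeDecorator whose result processing returns a rich value type) can be sketched as follows. Phone subclasses str for simplicity, and the names here are illustrative, not part of SQLAlchemy:

```python
from sqlalchemy import types

class Phone(str):
    """A str subclass: stores and compares like a string, but adds helpers."""
    def pretty_print(self):
        return "(%s) %s-%s" % (self[:3], self[3:6], self[6:])

class PhoneType(types.TypeDecorator):
    impl = types.String

    def process_bind_param(self, value, dialect):
        # toward the database: a plain string is fine
        return str(value) if value is not None else None

    def process_result_value(self, value, dialect):
        # from the database: wrap in the rich type so methods are available
        return Phone(value) if value is not None else None

p = Phone("9995551212")
print(p.pretty_print())  # (999) 555-1212
```

Because Phone is still a str, comparisons and serialization keep working; only loaded values pass through process_result_value and come back method-equipped.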




Re: [sqlalchemy] execute at beginning of every session

2011-02-16 Thread Jason McKellar
On Wed, Feb 16, 2011 at 11:51 AM, Michael Bayer mike...@zzzcomputing.com wrote:


 On Feb 16, 2011, at 10:22 AM, Landreville wrote:
  Is there a way to run a command at the beginning of every session,
  before anything else is run?

 SessionExtension has an after_begin() hook for this purpose.



That is perfect -- except maybe it's too soon. Is there a way to run
something before the first non-select statement in the session?

The reason is that I'm running Pylons with repoze.what, which checks the
database for the username; the controller action then runs in the same
transaction that repoze.what used for that lookup.

If this isn't possible I will figure out a way to end the transaction
repoze.what uses.
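For reference, the SessionExtension.after_begin hook mentioned above also has an event-API equivalent in later SQLAlchemy versions; a minimal sketch, assuming an in-memory SQLite engine:

```python
from sqlalchemy import create_engine, event, text
from sqlalchemy.orm import Session

engine = create_engine("sqlite://")
session = Session(engine)
seen = []

@event.listens_for(session, "after_begin")
def on_begin(session, transaction, connection):
    # fires when the session first binds a transaction to a connection,
    # before any of the caller's statements run in that transaction
    seen.append("begun")

session.execute(text("SELECT 1"))  # first statement triggers the hook
print(seen)
```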




Re: [sqlalchemy] association_proxy as property?

2010-11-16 Thread jason kirtland
On Tue, Nov 16, 2010 at 9:05 AM, A.M. age...@themactionfaction.com wrote:

 On Nov 16, 2010, at 11:43 AM, Michael Bayer wrote:


 On Nov 16, 2010, at 11:14 AM, A.M. wrote:
 To generate json from our SQLAlchemy model objects, we are using 
 iterate_properties to determine how to dictify the object. One of our 
 objects uses association_proxy which we would like to represent in the 
 JSON. Unfortunately, because it is not a property, the dictification misses 
 this property. I feel like I am missing something simple here.

 How can I make an association_proxy appear to be a property which appears 
 in iterate_properties of the mapper?

 2. implicit conversion to JSON and such is a little sloppy.   You'd be 
 better off using a structured approach like Colander: 
 http://docs.repoze.org/colander/

 It looks like I would have to either re-define all objects using the Colander 
 syntax or implement a method which converts existing SQLAlchemy models to 
 Colander schema objects. Even if the latter function already exists, I still 
 have the problem of determining automatically which properties to encode, no?

You may find you'll need to do even further work to determine which
properties to encode.  I do the same (using Flatland for
serialization), and part of that challenge was determining where the
edges of the business objects were.  (If you have relations, maybe
some of them are part of the object, such as a user's email addresses,
and some of them aren't, such as a User-to-Users list of the user's
friends.)  In the end I went with a combination of class annotation and heuristics based
on iterating mapper properties.  This allowed me to traverse the
mappings to reliably find the edges and also include the occasional
transient attribute or other oddball that needed to be in the
serialized form.




Re: [sqlalchemy] Inheriting custom collection to create another custom collection. Issues with the appenders/parents

2010-11-13 Thread jason kirtland
Hi Hector,

On Fri, Nov 12, 2010 at 7:46 AM, Hector Blanco white.li...@gmail.com wrote:
 Hello everyone.

 I was wondering if it's possible to inherit a custom collection to
 create another custom collection.

 A few days ago I was trying to use my own class as a custom_collection
 (http://groups.google.com/group/sqlalchemy/msg/ba1c64c3d227f586).
 Thanks to Michael Bayer I was able to do it, but now I would like to
 go one step further, and inherit my custom collection to create
 another custom collection.

 To simplify a little what I asked in the other message, let's say I have a:

 def ClassA(declarativeBase):
        __tablename__ = "aes"
        id = Column("id", Integer, primary_key=True)
        _whatever = Column("type", String(64))
        def __init__(self):
                self._whatever = "whatever"

 Then I have my custom collection for instances of ClassA:

 def ContainerOfA(dict):
        __emulates__ = set
        def __init__(self):
                self._field = "I'm a great... awesom! container"

        #I also defined the appender, remover and iterator
        @collection.iterator
        def __iter__(self):
                return self.itervalues()

        @collection.appender
        def append(self, item):
                self[item.getUniqueHash()] = item

        @collection.remover
        def remove(self, item):
                if item.getUniqueHash() in self.keys():
                        del self[item.getUniqueHash()]

 And then I was happily able to use it in any relationships:

 def YetAnotherClass(declarativeBase):
        id = Column(id, Integer, primary_key=True)
         classesA = relationship("ClassA",
                 uselist=True,
                 secondary=intermediate_table,
                 collection_class=lambda: ContainerOfA(),
                 cascade="all, delete, delete-orphan",
                 single_parent=True
         )

 Now I needed to extend ClassA in a Class B and ContainerOfA in
 ContainerOfB. I added the polymorphic stuff to ClassA and ClassB
 to create a joined table inheritance, as detailed in
 http://www.sqlalchemy.org/docs/orm/inheritance.html#joined-table-inheritance
 . (it seems to be working fine, that's why I am not completely
 detailing it here)

 def ClassB(ClassA):
         __tablename__ = "bs"  # Sorry for that
         __mapper_args__ = {'polymorphic_identity': 'ClassB'}
         id = Column("id", Integer, ForeignKey('aes.id'), primary_key=True)
         def __init__(self):
                 self._anotherWhatever = "another whatever"

 def ContainerOfB(ContainerOfA):
        def __init__(self):
                super(ContainerOfB, self).__init__()
        def anotherMethodOnlyForBInstances(self):
                # do interesting stuff for B classes

 Then I tried to use it in a relationship:

 def YetYetAnotherClass(declarativeBase):
        id = Column(id, Integer, primary_key=True)
         classesB = relationship("ClassB",
                 uselist=True,
                 secondary=another_intermediate_table,
                 collection_class=lambda: ContainerOfB(),
                 cascade="all, delete, delete-orphan",
                 single_parent=True
         )

 But when I tried to append a ClassB instance through the
 relationship detailed above, I got this exception:

 Type ContainerOfB must elect an appender method to be a collection class

I haven't been able to replicate this behavior.  When testing your
code I did notice that you are using 'def' to declare your classes,
which won't actually create the type.  I make that same typo myself
periodically and it can be quite tricky to track down the one def'd
class that's causing seemingly unrelated errors.

Anyhow, I've attached the working test case I put together.  If you
can modify this to replicate your behavior, we can track down any bugs
that might be present in the collection API's appender metadata
bookkeeping.  You definitely should not have to re-declare an
@appender on a subclass- the collection mechanics should be sweeping
over your inherited class and transparently picking up the methods.
This is definitely working for the cases in the SQLA unit tests, but
it's definitely possible you've found some corner case with that dict
that's declared to be emulating a set.

Cheers,
Jason
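The inherited-appender behavior described above can be sketched like this; plain-Python shape only, with id() standing in for the original getUniqueHash():

```python
from sqlalchemy.orm.collections import collection

class ContainerOfA(dict):
    __emulates__ = set

    @collection.appender
    def append(self, item):
        self[id(item)] = item          # id() stands in for getUniqueHash()

    @collection.remover
    def remove(self, item):
        self.pop(id(item), None)

    @collection.iterator
    def __iter__(self):
        return iter(self.values())

class ContainerOfB(ContainerOfA):
    # no re-declared @collection.appender needed: the decorator metadata
    # on the inherited methods is picked up by the collection machinery
    def another_method_only_for_b(self):
        return len(self)

c = ContainerOfB()
c.append("item")
print(list(c))  # ['item']
```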

 I thought... ok, ok... let's just explicitly add the 'appender' to
 the ContainerOfB class...  The only thing I need to do is calling the
 appender of the super class, anyway... no biggie and so I did:

 def ContainerOfB(ContainerOfA):
        # [ . . . ] #
        @collection.appender
        def append(self, classBInstance):
                return super(ContainerOfB, self).append(classBInstance)

 But then... another exception when I tried to add an instance of ClassB():

 InvalidRequestError: Instance ClassB at 0xba9726c is already associated 
 with an instance of class 'mylibraries.classesTest.YetYetAnotherClass' 
 via its YetYetAnotherClass.classesB attribute, and is only allowed a single 
 parent.

 Well... I need

[sqlalchemy] Re: Is there a way to globally set DateTime string formatting?

2010-09-27 Thread Jason Baker
On Sep 24, 7:09 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 If you have a stringified date format that you'd like to pass to SQL 
 statements as values, such that the string is parsed into a datetime then 
 passed to the DBAPI, just use TypeDecorator around 
 DateTime: http://www.sqlalchemy.org/docs/core/types.html?highlight=typedecorato...

Yeah, but I can't make reflection use the TypeDecorator subclass
instead of DATETIME can I?  I can think of two options:

 1.  Go through each reflected table and replace each DATETIME type
with a TypeDecorator subclass.
 2.  Find all of the DATETIME columns, and implement a preprocessing
step for all of them.  So whenever I encounter a DATETIME value, I'd
convert it from a string into a Python datetime before passing it into
insert/update.

I don't necessarily like either of these options as I was hoping to
not have my code have to deal with these kinds of typing issues, but I
might be out of luck.

One other idea though:  is there any way to use the compiler extension
to do this?  For instance, could I add a @compiles function that
basically says if this is a datetime column then do this else do
whatever is done by default for columns?  The type compilation
capability almost does what I want, except it's for DDL rather than
DML.
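One concrete option for the reflection side, sketched against a later SQLAlchemy API than the thread (1.4+ shown): a column listed explicitly in an otherwise-reflected Table overrides the reflected type, so DATETIME columns can be swapped for a TypeDecorator without touching the rest. StringyDateTime is a hypothetical name:

```python
from datetime import datetime
from sqlalchemy import Column, DateTime, MetaData, Table, create_engine, text, types

class StringyDateTime(types.TypeDecorator):
    """Accepts 'YYYY-MM-DD HH:MM:SS' strings and hands datetimes to the DBAPI."""
    impl = DateTime
    cache_ok = True

    def process_bind_param(self, value, dialect):
        if isinstance(value, str):
            value = datetime.strptime(value, "%Y-%m-%d %H:%M:%S")
        return value

engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE events (id INTEGER PRIMARY KEY, created DATETIME)"))

meta = MetaData()
# naming the column explicitly overrides the type reflection would choose
events = Table("events", meta,
               Column("created", StringyDateTime()),
               autoload_with=engine)
print(type(events.c.created.type).__name__)  # StringyDateTime
```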




[sqlalchemy] Re: Why isn't this Json column type working?

2010-09-19 Thread Jason Baker
On Sep 18, 9:08 pm, Michael Bayer mike...@zzzcomputing.com wrote:
  You might after that also throw a pdb into the process_bind_param() method 
 and ensure that its being called

Sorry, I should have been more clear about this.  If I add a print
statement in process_bind_param it doesn't get executed.

 and as the ultimate sanity check info default.py do_execute().

Could you clarify what you mean by this?  Perhaps I'm being a bit
dense.  :-/

 I would also ensure you're on the latest production release of MySQLdb as 
 this error does seem faintly familiar, like there might have been issues with 
 the consumption of bind parameters on the MySQLdb side, or test with a 
 different DBAPI driver.

Yes, I'm presently running 1.2.3 which is the latest.




[sqlalchemy] Why isn't this Json column type working?

2010-09-18 Thread Jason Baker
So I have a JSON column type that I wrote.  As it turns out, it's pretty
much the same as the one here[1]:

from json import dumps, loads

class Json(types.TypeDecorator, types.MutableType):
    impl = types.Text

    def process_bind_param(self, value, dialect):
        return dumps(value)

    def process_result_value(self, value, dialect):
        return loads(value)

[1]
http://groups.google.ca/group/sqlalchemy/browse_thread/thread/81168f812c22f3f6

I put this into the following (declarative) class:

Base = declarative_base()

class SessionMixin(object):
    """This adds a property that allows accessing the session in an object."""

    _session = None

    @classmethod
    def set_session(cls, session):
        cls._session = session

    @property
    def session(self):
        if not self._session:
            self.__class__._session = object_session(self)
        return self._session


class Content(Base, SessionMixin):
    __tablename__ = 'content'
    id = Column(Integer, primary_key=True)
    entity = Column(String)
    handle_value = Column(String)
    cms_guid = Column(String, default=uuid4)
    last_change_guid = Column(String, ForeignKey('changes.change_guid'))
    data = Column(Json)
    export_date = Column(BigInteger)

    last_change = relationship('Change')


I get the following error when I try to run it (with some sensitive data
left out):

Traceback (most recent call last):
  File /Users/jason/.envs/main/bin/epfimport, line 8, in module
load_entry_point('epf-import==0.1dev', 'console_scripts', 'epfimport')()
  File /Users/jason/src/epf-import/src/epf_import/run.py, line 35, in main
load_rows(fname)
  File /Users/jason/src/epf-import/src/epf_import/run.py, line 71, in
load_rows
Session.commit()
  File
/Users/jason/.envs/main/lib/python2.6/site-packages/sqlalchemy/orm/scoping.py,
line 132, in do
return getattr(self.registry(), name)(*args, **kwargs)
  File
/Users/jason/.envs/main/lib/python2.6/site-packages/sqlalchemy/orm/session.py,
line 595, in commit
self.transaction.commit()
  File
/Users/jason/.envs/main/lib/python2.6/site-packages/sqlalchemy/orm/session.py,
line 367, in commit
self._prepare_impl()
  File
/Users/jason/.envs/main/lib/python2.6/site-packages/sqlalchemy/orm/session.py,
line 351, in _prepare_impl
self.session.flush()
  File
/Users/jason/.envs/main/lib/python2.6/site-packages/sqlalchemy/orm/session.py,
line 1359, in flush
self._flush(objects)
  File
/Users/jason/.envs/main/lib/python2.6/site-packages/sqlalchemy/orm/session.py,
line 1440, in _flush
flush_context.execute()
  File
/Users/jason/.envs/main/lib/python2.6/site-packages/sqlalchemy/orm/unitofwork.py,
line 299, in execute
rec.execute(self)
  File
/Users/jason/.envs/main/lib/python2.6/site-packages/sqlalchemy/orm/unitofwork.py,
line 443, in execute
uow
  File
/Users/jason/.envs/main/lib/python2.6/site-packages/sqlalchemy/orm/mapper.py,
line 1863, in _save_obj
execute(statement, params)
  File
/Users/jason/.envs/main/lib/python2.6/site-packages/sqlalchemy/engine/base.py,
line 1194, in execute
params)
  File
/Users/jason/.envs/main/lib/python2.6/site-packages/sqlalchemy/engine/base.py,
line 1274, in _execute_clauseelement
return self.__execute_context(context)
  File
/Users/jason/.envs/main/lib/python2.6/site-packages/sqlalchemy/engine/base.py,
line 1305, in __execute_context
context.parameters[0], context=context)
  File
/Users/jason/.envs/main/lib/python2.6/site-packages/sqlalchemy/engine/base.py,
line 1404, in _cursor_execute
context)
  File
/Users/jason/.envs/main/lib/python2.6/site-packages/sqlalchemy/engine/base.py,
line 1397, in _cursor_execute
context)
  File
/Users/jason/.envs/main/lib/python2.6/site-packages/sqlalchemy/engine/default.py,
line 299, in do_execute
cursor.execute(statement, parameters)
  File
/Users/jason/.envs/main/lib/python2.6/site-packages/MySQLdb/cursors.py,
line 174, in execute
self.errorhandler(self, exc, value)
  File
/Users/jason/.envs/main/lib/python2.6/site-packages/MySQLdb/connections.py,
line 36, in defaulterrorhandler
raise errorclass, errorvalue
sqlalchemy.exc.ProgrammingError: (ProgrammingError) (1064, 'You have an
error in your SQL syntax; check the manual that corresponds to your MySQL
server version for the right syntax to use near \'\'download_size\':
\'235905\', \'application_id\': \'292412992\', \'copyright\': \'200\'
at line 1') 'INSERT INTO changes (change_guid, rev, op, entity,
transaction_guid, cms_guid, data) VALUES (%s, %s, %s, %s, %s, %s, %s)'
('28e0b2703494457c8f4a09b082274cb6', None, 'INSERT', 'itunes_application',
'34872a5ee7e5427199fa3a7efb026f97', '3df874eed48740ce9d54ec6181827956',
{...})

It looks like the Json column type is getting bypassed and the dictionary is
getting converted directly into a string.  Can anyone shed some light on
what I'm doing wrong (and more importantly how to fix it)?

I'm using sqlalchemy 0.6.4 under Python 2.6.1 on OS X.
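Worth noting: the failing INSERT in the traceback targets the changes table, not content, so the Change mapping's data column may simply not be using the Json type. A minimal round trip (MutableType dropped, and written against the 1.4+ Core API, so newer than the 0.6.4 in this thread) shows the decorator firing when the column does use it:

```python
import json
from sqlalchemy import (Column, Integer, MetaData, Table, create_engine,
                        insert, select, types)

class Json(types.TypeDecorator):
    impl = types.Text
    cache_ok = True

    def process_bind_param(self, value, dialect):
        return json.dumps(value)

    def process_result_value(self, value, dialect):
        return json.loads(value)

engine = create_engine("sqlite://")
meta = MetaData()
content = Table("content", meta,
                Column("id", Integer, primary_key=True),
                Column("data", Json))
meta.create_all(engine)

with engine.begin() as conn:
    conn.execute(insert(content).values(data={"a": 1}))
    restored = conn.execute(select(content.c.data)).scalar()
print(restored)  # {'a': 1}
```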


[sqlalchemy] Slides from the Advanced SQLAlchemy Customization tutorial at EuroPython

2010-07-19 Thread jason kirtland
Fellow Alchemers,

I've posted the slides and code from the Advanced SQLAlchemy
Customization tutorial I presented yesterday at EuroPython 2010 in
Birmingham.  Enjoy!

http://discorporate.us/jek/talks/#d2010-07-18

Talk description: http://www.europython.eu/talks/talk_abstracts/#talk67

Cheers,
Jason




Re: [sqlalchemy] Re: blocking on Base.metadata.drop_all() in test

2010-07-08 Thread jason kirtland
Your scoped session still has an active connection, which is blocking
the drop.  Call session.remove() before the drop, or configure the
session with expires_on_commit=False to not issue SELECTs to fetch
object state after the final commit().
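A minimal sketch of that fix, assuming a scoped session and an in-memory SQLite engine (1.4-style imports):

```python
from sqlalchemy import Column, Integer, create_engine, inspect
from sqlalchemy.orm import declarative_base, scoped_session, sessionmaker

engine = create_engine("sqlite://")
Base = declarative_base()

class Item(Base):
    __tablename__ = "items"
    id = Column(Integer, primary_key=True)

Base.metadata.create_all(engine)
Session = scoped_session(sessionmaker(bind=engine))
Session.add(Item())
Session.commit()

Session.remove()                  # give the connection back first...
Base.metadata.drop_all(engine)    # ...so the DROP isn't blocked
print(inspect(engine).has_table("items"))  # False
```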

On Thu, Jul 8, 2010 at 9:27 AM, zende mtam...@gmail.com wrote:
 I reproduced the issue the script below:
 http://gist.github.com/468199

 Sorry for the weak explanation before.  This has little to do with
 being in tests except that's the only code that drops and creates
 the db for any reason.  Ctrl-C does nothing when it blocks.

 Chris, try running the script in the link, and let me know if you are
 able to reproduce the issue







[sqlalchemy] Re: Help with optimizing

2010-06-02 Thread Jason Baker
On May 30, 8:39 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 OK well by re018792aea57 I've bumped the callcounts down a *little*, reducing 
 compiler overhead within the flush().     Perhaps it will be a bigger 
 difference for your app which seems to be heavy on flush() calls.

As it turns out, that change didn't help much at all.  :-/

However, performance might not be as big an issue as I suspected.  It
turns out the code I've been working with is now much faster since
using SQLAlchemy.  We are perhaps taking a bit more CPU time than we
were before, but SQLAlchemy has given us much greater flexibility to
write efficient queries.




Re: [sqlalchemy] Help with optimizing

2010-05-28 Thread Jason Baker
On Fri, May 28, 2010 at 3:48 PM, Michael Bayer mike...@zzzcomputing.com wrote:


 This is the ORM affecting 124K statements so must be a very large data
 persist (it seems like a persist heavy operation, i see 287K objects total
 treated as part of units of work).

 It seems like you are calling commit() a very large number of times.   So
 the most obvious enhancement would be to call this a whole lot less - the
 commit necessitates a flush, and also by default fully expires the session
 (unless you turn off expire_on_commit), meaning all rows have to be fully
 reloaded, which is probably making the number of statements executed much
 larger than it needs to be.


Unfortunately committing less isn't doable.  :-(

We need to have a transaction for each record we process; otherwise, we'd
have to go back and clean up if something failed.  I will try turning off
expire_on_commit and see if that helps, though.


 A good deal of time is spent in compiling constructs into SQL strings here,
 there is a feature whereby this can be cut down dramatically for similar
 statements executed many times which is the compiled_cache execution
 option.   The ORM uses this a bit internally now though it might be a nice
 feature for you to be able to switch it on for a given session, and have all
 SQL statement compilation cached for the life of that session.   This
 feature can be approximated right now though I'd want to modify _save_obj to
 not overwrite the cache with its own, which defeats the usage of a
 session-spanning compilation cache.


That would be a *huge* help.  Would this involve a custom Query subclass?

-- 
Jason Baker
Developer
ZeOmega
3010 Gaylord Parkway, Suite 210
Frisco, TX 75034
O:  214-618-9880  ext 8024
jba...@zeomega.com
www.ZeOmega.com
Proven. Progressive. Partner.




Re: [sqlalchemy] Sql alchemy-Oracle Error

2010-05-05 Thread Jason Baker
On Wed, May 5, 2010 at 7:42 AM, dhanil anupurath
dhanilanupur...@gmail.com wrote:

 Hi

 I am using sqlalchemy in my TurboGears application.
 some of my classes have columns with PickleType data types.
 these get converted to BLOB dataType in the database.
 I was using mySql till recently and everything was working fine.

 Now i am shifting to oracle. Tables are getting created properly.
 (setup-app in tg project ran successfully). But when i try to query
 a table having BLOB column ,  i get the following error.
 Exception in thread Thread-22:
 Traceback (most recent call last):
  File /usr/lib/python2.4/threading.py, line 442, in __bootstrap
self.run()
  File /svnee/trunk/src/core/services/task_service.py, line 76, in
 check_calendar_tasks
for cal in conn.query(Task).\
  File /root/tg2env/lib/python2.4/site-packages/SQLAlchemy-0.5.6-
 py2.4.egg/sqlalchemy/orm/query.py, line 1361, in __iter__
return self._execute_and_instances(context)
  File /root/tg2env/lib/python2.4/site-packages/SQLAlchemy-0.5.6-
 py2.4.egg/sqlalchemy/orm/query.py, line 1364, in
 _execute_and_instances
result = self.session.execute(querycontext.statement,
 params=self._params, mapper=self._mapper_zero_or_none())
  File /root/tg2env/lib/python2.4/site-packages/SQLAlchemy-0.5.6-
 py2.4.egg/sqlalchemy/orm/session.py, line 754, in execute
return self.__connection(engine, close_with_result=True).execute(
  File /root/tg2env/lib/python2.4/site-packages/SQLAlchemy-0.5.6-
 py2.4.egg/sqlalchemy/engine/base.py, line 824, in execute
return Connection.executors[c](self, object, multiparams, params)
  File /root/tg2env/lib/python2.4/site-packages/SQLAlchemy-0.5.6-
 py2.4.egg/sqlalchemy/engine/base.py, line 874, in
 _execute_clauseelement
return self.__execute_context(context)
  File /root/tg2env/lib/python2.4/site-packages/SQLAlchemy-0.5.6-
 py2.4.egg/sqlalchemy/engine/base.py, line 896, in __execute_context
self._cursor_execute(context.cursor, context.statement,
 context.parameters[0], context=context)
  File /root/tg2env/lib/python2.4/site-packages/SQLAlchemy-0.5.6-
 py2.4.egg/sqlalchemy/engine/base.py, line 950, in _cursor_execute
self._handle_dbapi_exception(e, statement, parameters, cursor,
 context)
  File /root/tg2env/lib/python2.4/site-packages/SQLAlchemy-0.5.6-
 py2.4.egg/sqlalchemy/engine/base.py, line 931, in
 _handle_dbapi_exception
raise exc.DBAPIError.instance(statement, parameters, e,
 connection_invalidated=is_disconnect)
 DatabaseError: (DatabaseError) ORA-01036: illegal variable name/number


 i did a normal select query from sqlplus:

 select * from tasks;
 ORA-00932: inconsistent datatypes: expected NUMBER got BLOB

 These are my class and table definitions:

 class Task(DeclarativeBase):
task_id = Column(Integer,Sequence('id_seq'), primary_key=True)
task_type = Column(Unicode(50), default=to_unicode('Task'))
name = Column(Unicode(256))
entity_id = Column(Unicode(256))
entity_name = Column(Unicode(50))
context = Column(PickleType)
params = Column(PickleType)
kw_params = Column(PickleType)
processors = Column(ImmutablePickleType)


  __tablename__ = "tasks"
TASK_ID  Number
TASK_TYPEVarchar2
NAME Varchar2
ENTITY_IDVarchar2
ENTITY_NAME  Varchar2
CONTEXT  Blob
PARAMS   Blob
KW_PARAMSBlob
PROCESSORS  Blob
USER_NAME   Varchar2
SUBMITTED_ON   Date 7
REPEATING   Number


Any chance you could send the query that's generating the Python traceback?
You should be able to see the generated queries if you pass echo=True in the
create_engine arguments.




Re: [sqlalchemy] session lifecycle and wsgi

2010-04-28 Thread jason kirtland
On Wed, Apr 28, 2010 at 7:52 AM, Chris Withers ch...@simplistix.co.uk wrote:
 Diana Clarke wrote:

 Finally, we're using pylons and are removing the contextual session in the
 finally clause of the base controller's __call__ method.

 class BaseController(WSGIController):

    def __call__(self, environ, start_response):
        try:
            ...
        finally:
            session.remove()

 Yeah, I'm trying to find out if this .remove() is actually necessary.

.remove() as the final operation in a request ensures that no session
state leaks from one web request to another. The next request in that
thread or scoping context will get an entirely fresh session to work
with.

If finishing with a .remove() is a big deal in your environment, which
it seems like it is, you could do a .remove() at the start of the
request instead.
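That start-of-request variant can be sketched as a tiny WSGI middleware; the scoped session here is illustrative, standing in for the application's own:

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

# illustrative scoped session; in Pylons this would be the app's Session
Session = scoped_session(sessionmaker(bind=create_engine("sqlite://")))

class FreshSessionMiddleware:
    """WSGI middleware that discards leftover session state at request start."""
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        Session.remove()  # next Session() access gets a brand-new session
        return self.app(environ, start_response)

def app(environ, start_response):
    return [b"ok"]

wrapped = FreshSessionMiddleware(app)
print(wrapped({}, None))  # [b'ok']
```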




Re: [sqlalchemy] session lifecycle and wsgi

2010-04-28 Thread jason kirtland
On Wed, Apr 28, 2010 at 8:55 AM, Chris Withers ch...@simplistix.co.uk wrote:
 jason kirtland wrote:

 On Wed, Apr 28, 2010 at 7:52 AM, Chris Withers ch...@simplistix.co.uk
 wrote:

 Diana Clarke wrote:

 Finally, we're using pylons and are removing the contextual session in
 the
 finally clause of the base controller's __call__ method.

 class BaseController(WSGIController):

   def __call__(self, environ, start_response):
       try:
           ...
       finally:
           session.remove()

 Yeah, I'm trying to find out if this .remove() is actually necessary.

 .remove() as the final operation in a request ensures that no session
 state leaks from one web request to another. The next request in that
 thread or scoping context will get an entirely fresh session to work
 with.

 Okay, would .close() be equivalent here?

Not really, .close is a Session method.  See below.

 If finishing with a .remove() is a big deal in your environment, which
 it seems like it is, you could do a .remove() at the start of the
 request instead.

 What happens if you call .remove() on a virgin session?

.remove() is specific to the ScopedSession container.  It's not a
Session method.  It will .close() the session for the current scope,
if any (which is effectively a no-op if there is a session but it
hasn't performed any work), then remove that session from the scope.
The next access to the ScopedSession container will produce a fresh
session.




Re: [sqlalchemy] Mapping dictionaries with string keys and record values

2010-04-26 Thread jason kirtland
On Mon, Apr 26, 2010 at 8:24 AM, Michael Bayer mike...@zzzcomputing.com wrote:
 Torsten Landschoff wrote:

 Thanks for your reply and the remarks! Do you plan to extend
 attribute_mapped_collection to update the key like in my example?

 here's some things to note on that:

 1. I'm not sure why it wasn't that way already, and I'd want to hear from
 Jason Kirtland, its author, on if we are missing something or otherwise
 whats up.   I have a vague notion that there was a reason for this, or
 maybe not.

It's not something that can be easily solved in the general case with
the current API. The mapped collections use a 'keyfunc' to figure out
the dictionary key for loaded instances, for example
operator.attrgetter('name') for attribute_mapped_collection('name').
Mechanically reversing that logic in a setting operation sounds
pretty hard to me, but perhaps if we allowed an 'assignfunc' function
to be supplied that would do the trick.  Internally, the collection
code would call it during a dict['key'] = instance assignment
operation, maybe passing just the key value and the instance:

  def assignfunc(key, instance):
  instance.name = key

For highly constrained types like the attribute- and column-mapped
collections, these functions would be easy to generate.

A good test for the feature would be a mapped collection that maps a
tuple of attributes, such as one created by
attribute_mapped_collection(('x', 'y')).  Assigning collection[1,
2] = instance should assign both instance.x and instance.y in that
case.
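The proposed keyfunc/assignfunc pairing can be sketched in plain Python, outside of SQLAlchemy's collection machinery. The `Item`, `make_funcs`, and `MappedDict` names below are hypothetical illustrations of the idea, not a real SQLAlchemy API:

```python
from operator import attrgetter

class Item:
    def __init__(self, x=None, y=None):
        self.x, self.y = x, y

def make_funcs(attr_names):
    # keyfunc: instance -> dict key (a tuple of attribute values);
    # assignfunc: write a key back onto the instance's attributes
    keyfunc = attrgetter(*attr_names)
    def assignfunc(key, instance):
        for name, value in zip(attr_names, key):
            setattr(instance, name, value)
    return keyfunc, assignfunc

keyfunc, assignfunc = make_funcs(('x', 'y'))

class MappedDict(dict):
    # on assignment, push the key into the instance before storing it
    def __setitem__(self, key, instance):
        assignfunc(key, instance)
        super().__setitem__(key, instance)

coll = MappedDict()
item = Item()
coll[1, 2] = item
assert (item.x, item.y) == (1, 2)   # assignment set both attributes
assert keyfunc(item) == (1, 2)      # keyfunc round-trips the key
```

This is exactly the tuple-key test case described above: assigning `collection[1, 2] = instance` sets both `instance.x` and `instance.y`.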

 2. I wonder if there's a way to make this happen more deeply than within
 setattr().   Like the collection internals would include an event to
 operate upon the target object that includes the other args from the
 collection decorator.

I have a hunch this is only meaningful for mapped collections-
mutations like list_collection[2:5] would be difficult to translate
and I'm not sure what information one would want to capture there.
Worth a look though.




[sqlalchemy] Mapper table properties

2010-04-23 Thread Jason Baker
Given an entity class (or entity instance), I'd like to get the table that
is mapped to it.  If I get the mapper using object_mapper/class_mapper, then
I get a mapper with the following properties defined (among others):
local_table, mapped_table, and tables.  Can someone help me understand what
the difference between each of these properties is (or point me towards
documentation on them)?




Re: [sqlalchemy] Can I log queries from where they're created?

2010-04-06 Thread Jason Baker
On Mon, Apr 5, 2010 at 5:55 PM, Michael Bayer mike...@zzzcomputing.comwrote:

 I'm unclear on what usage you'd like to see.


What I'm trying to do would be roughly equivalent to this:

def query_foo(self):
logger = logging.getLogger(__name__ + '.' + self.__class__.__name__)
query = self.session.query(Foo)
logger.debug(query)
logger.debug(query.statement.compile().params)
return query.all()

However, this is pretty unmanageable.  The above method could be reduced to
one line of code if not for the logging boilerplate.  So what I'd like to do
is somehow be able to do the above, but without the logger boilerplate.

The main reason for doing this is to have queries logged to where they're
used.  This is really useful because I'm using nose to run tests, and it
would really be nice to be able to filter out all queries except the ones
that are generated by a particular class or module.



 Are you looking for all queries to do this (use Session with query_cls)?


The only way to subclass query to do what I'm looking for would be to
override every method that actually executes the query to log the query and
then execute it.  Is there a better way to do it that I'm not thinking of?

-- 
Jason Baker
Developer
ZeOmega
3010 Gaylord Parkway, Suite 210
Frisco, TX 75034
O:  214-618-9880  ext 8024
jba...@zeomega.com
www.ZeOmega.com
Proven. Progressive. Partner.




[sqlalchemy] envbuilder setup for sqlalchemy

2010-04-06 Thread Jason Baker
Hello,

Just in case anyone can use this, I've created a sample envbuilder file that
will create a virtualenv with sqlalchemy installed from hg.  The file is
here:
http://github.com/jasonbaker/envbuilder/blob/master/examples/sqlalchemy/.env

To use it:

 1. Install the development version of envbuilder (easy_install
envbuilder==dev)
 2. Put the above .env file in a directory somewhere.
 3. Do envb checkout from the command-line.  If you don't have git
installed, do envb checkout -p sqlalchemy instead.
 4. Do envb setup.

And you now have a virtualenv set up with the development version of
sqlalchemy.  A couple of other things you can do:

run tests - envb test
update sqlalchemy - envb pull

I'm looking for ways to help make this fit into peoples' workflows, so if
you have any suggestions or questions let me know!




[sqlalchemy] Can I log queries from where they're created?

2010-04-05 Thread Jason Baker
Here's what I'd like to do.  Suppose I have some class Foo in
some_module.py:

class Foo(Entity):
def all_foos(self):
return self.session.query(Foo).all()

...sqlalchemy by default will log this to sqlalchemy.engine.  However,
this makes it a bit difficult to narrow down what query is coming from
where.  What I would like to have happen is that this is logged to the
name some_module.Foo (or some_other_module.Bar if it comes from a
different class).

The best approach that I've come up with involves a query option and a
decorator:


class LoggerOption(MapperOption):
    """This is an option that may be passed in to sqlalchemy's
    Query.options.  This will make the query be logged to
    any arbitrary namespace. Example::

        session.query(Mbr).options(LoggerOption('foo'))

    The above query will be logged on the logger returned
    by logging.getLogger('foo').
    """

    def __init__(self, name):
        self.name = name

    propagate_to_loaders = True

    def process_query(self, query):
        logger = logging.getLogger(self.name)
        statement = query.statement
        logger.debug(str(statement))
        logger.debug(str(statement.compile().params))

def query(attr_name):
    def _query(func):
        def _run_query(self, *args, **kwargs):
            query = func(self, *args, **kwargs)
            query = query.options(
                LoggerOption(self.__class__.__module__ + '.' +
                             self.__class__.__name__))
            attr = getattr(query, attr_name)
            return attr()
        _run_query.__doc__ = func.__doc__
        return _run_query
    return _query

...so I could rewrite Foo like this:

class Foo(Entity):
@query('all')
def all_foos(self):
return self.session.query(Foo)

This works, but it feels like there should be a better way than having
to attach a decorator to every method and having to return a query
from that method.  Is there any better way to do this?




Re: [sqlalchemy] error handling for sessionmaker function

2010-01-11 Thread jason kirtland
On Mon, Jan 11, 2010 at 4:07 PM, Manlio Perillo
manlio.peri...@gmail.com wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Hi.

 I'm updating some of my code to SQLALchemy 0.6, and I have noted a
 problem with the sessionmaker function.

 The problem is a compatibility one: old versions use the transactional
 parameter, new ones the autocommit parameter.

 Usually, to handle these problems I use the try/except method:

 try:
    return orm.sessionmaker(bind=bind, autocommit=autocommit)
 except TypeError:
    # COMPAT: SQLAlchemy 0.4.x, deprecated in 0.5.x
    transactional = not autocommit
    return orm.sessionmaker(bind=bind, transactional=transactional)


 However this does not work, since the error is raised only when the actual
 Session instance is created.


 As far as can understand, the sessionmaker function supports keyword
 arguments since user can specify a custom session class to use.

 Can error handling be improved?

How about:

try:
orm.create_session(autocommit=autocommit)
except TypeError:
# COMPAT: SQLAlchemy 0.4.x, deprecated in 0.5.x
transactional = not autocommit
return orm.sessionmaker(bind=bind, transactional=transactional)
else:
return orm.sessionmaker(bind=bind, autocommit=autocommit)

Creating and disposing a session via create_session() in this way
isn't particularly expensive and won't initiate any database
connections or activity.




[sqlalchemy] Re: Modify the start number of a sequence after it's been created

2009-12-16 Thread Jason R. Coombs
Here's what I do.

First, I get the sequence from the table's columns:

seq = table.c.id.default
# or if you have a mapper-class
# seq = MyClass.id_property.property.columns[0].default

Then, I execute that sequence to advance the sequence until it's the
value needed.

engine.execute(seq)

The reason I only execute the sequence once is because in my use case,
I know I've just inserted one record manually, so I only want to
advance the sequence once.

I think in your case, after getting the sequence, I would do something
like the following:

column = table.c.id
# or if you have a mapper
# column = MyClass.id_property.property.columns[0]
seq = column.sequence
# get the max ID
maxid = session.query(func.max(column)).one()[0]
# repeatedly advance the sequence until it's greater than or equal to
# the maxid
while seq.execute() < maxid: pass


I don't think this is the best solution, but it's the best thing I've
come up with so far with my limited knowledge.

Good luck.

On Nov 16, 12:56 pm, Jaume Sabater jsaba...@gmail.com wrote:
 Hello everyone!

 I've been searching information regarding this topic but without help
 so far. I have a dbloader.py script that takes information from a
 number of CSV files and dumps it into the database. After that we
 execute a number of tests to check all services are working fine
 (returning the right information and calculated data such as
 availability and so on).

 In these CSV files I have manually set up the id fields of every
 table, but when I add the rows to the database, I am passing the id
 value as part of the dictionary, therefore the sequence of the primary
 key is not updated. So, after all data has been loaded, I'd like to
 update the start value of the sequence.

 It's PostgreSQL, therefore I could do something like:

 maxid = engine.execute('select max(id) from table').fetchone()[0]
 engine.execute("select setval('table_id_seq', %s)" % (maxid + 1))

 But I'd like to do this properly, at SA level. Just in case database
 changes in the future or just to satisfy myself. But I have not been
 able to find the way to modify the attributes of the sequence of each
 table I find in the metadata. I have found ways of retrieving the
 nextid in the documentation, but not modifying it.

 Is there a way to grab a sequence and set it's next value using the
 abstractions SA facilitates?

 Thanks in advance.





[sqlalchemy] KeyError in getting session_identity_map[identitykey]

2009-12-11 Thread Jason
I'm seeing a keyerror in mapper.py where it does:
instance = session_identity_map[identitykey]

The strange thing is, I see the code doing a check to make sure
identitykey is in the session_identity_map.
This is a table that's not being updated and it's pretty much static.
Could another thread be removing the key?
Any help would be greatly appreciated.

Jason





[sqlalchemy] Re: KeyError in getting session_identity_map[identitykey]

2009-12-11 Thread Jason
Thanks Michael.
This is happening intermittently, and the identitykey check is
happening just before trying to get it from the session_identity_map
in mapper.py.
I'm not putting anything in there myself, it's happening in the query
call.
So it seems to me the most likely culprit is another process/thread?

On Dec 11, 11:34 am, Michael Bayer mike...@zzzcomputing.com wrote:
 Jason wrote:
  I'm seeing a keyerror in mapper.py where it does:
  instance = session_identity_map[identitykey]

  The strange thing is, I see the code doing a check to make sure
  identitykey is in the session_identity_map.
  This is a table that's not being updated and it's pretty much static.
  Could another thread be removing the key?
  Any help would be greatly appreciated.

 SQLA does not create any sessions or spawn any threads, so any confusion
 between multiple sessions or concurrent threading activity would originate
 from outside of SQLA.  I would advise to check that you're looking for the
 right key in identity_map and also note that you can't just put things
 in the map directly and expect it to work - use Session.add() for that.





  Jason






[sqlalchemy] Re: MySQL has gone away (again)

2009-06-08 Thread jason kirtland
Kamil Gorlo wrote:
 On Thu, Jun 4, 2009 at 4:20 PM, Michael Bayermike...@zzzcomputing.com wrote:
 the connection went from good to dead within a few seconds (assuming SQL
 was successfully emitted on the previous checkout).   Your database was
 restarted or a network failure occurred.
 
 There is no other option? I'm pretty sure that DB was not restarted,
 network failure is of course possible but still.. (this is the same
 LAN).

Another cause of "went away" messages is a query that exceeds the
configured memory resources on the server.  Taking a look at MySQL's
logs may shed more light and give hints for which buffers need tuning if
that's the problem.

 But, assuming this is external problem - is there any way to tell
 SQLAlchemy to try another connection for the same request (instead of
 returning HTTP 500 for user), or maybe other pooling strategy or even
 something else?

Yes, with a simple pool event listener you can ensure the liveliness of 
connections before the pool hands them out for use.  Usage example is 
attached.

Cheers,
Jason

class LookLively(object):
    """Ensures that MySQL connections checked out of the
    pool are alive."""

    def checkout(self, dbapi_con, con_record, con_proxy):
        try:
            try:
                dbapi_con.ping(False)
            except TypeError:
                dbapi_con.ping()
        except dbapi_con.OperationalError, ex:
            if ex.args[0] in (2006, 2013, 2014, 2045, 2055):
                # caught by pool, which will retry with a new connection
                raise exc.DisconnectionError()
            else:
                raise



from sqlalchemy import exc


class LookLively(object):
    """Ensures that MySQL connections checked out of the pool are alive.

    Specific to the MySQLdb DB-API.  Note that this can not totally
    guarantee live connections- the remote side can drop the connection
    in the time between ping and the connection reaching user code.

    This is a simplistic implementation.  If there's a lot of pool churn
    (i.e. implicit connections checking in and out all the time), one
    possible and easy optimization would be to add a timer check:

    1) On check-in, record the current time (integer part) into the
       connection record's .properties
    2) On check-out, compare the current integer time to the (possibly
       empty) record in .properties.  If it is still the same second as
       when the connection was last checked in, skip the ping.  The
       connection is probably fine.

    Something much like this logic will go into the SQLAlchemy core
    eventually.

    -jek
    """

    def checkout(self, dbapi_con, con_record, con_proxy):
        try:
            try:
                dbapi_con.ping(False)
            except TypeError:
                dbapi_con.ping()
        except dbapi_con.OperationalError, ex:
            if ex.args[0] in (2006, 2013, 2014, 2045, 2055):
                raise exc.DisconnectionError()
            else:
                raise

# To see a connection die post-ping, take the sleep out of reap()
# below and run this in a tight loop.  It should happen eventually on
# a fast machine.
#
#   $ while thisscript.py; do echo; done

if __name__ == '__main__':
    import sys, time
    if len(sys.argv) > 1:
        from pkg_resources import require
        require('mysql-python==%s' % sys.argv[1])

    from sqlalchemy import *
    e = create_engine('mysql:///test?unix_socket=/var/tmp/mysql.sock',
                      max_overflow=0, pool_size=2, # constrain our test
                      listeners=[LookLively()])

    # reserve a connection.
    reaper = e.connect()
    def reap(id):
        reaper.execute(text('KILL :thread'), thread=id)
        time.sleep(0.15)  # give the thread a chance to die

    c2 = e.connect()
    c2_threadid = c2.scalar('SELECT CONNECTION_ID()')
    print "Connection c2 id=%s" % c2_threadid

    # return c2 to the pool.  (the db-api connection will remain open)
    c2.close()
    del c2

    reap(c2_threadid)

    c2 = e.connect()
    new_threadid = c2.scalar('SELECT CONNECTION_ID()')
    print "Connection c2 now has id=%s" % new_threadid

    try:
        # connection is still alive, kill it mid-stream
        reap(new_threadid)
        c2.execute('SELECT 1')
        assert False
    except Exception, ex:
        print "Expected: Did not reconnect mid-transaction, exception:", ex

    c2 = e.connect()
    final_threadid = c2.scalar('SELECT CONNECTION_ID()')
    print "Connection c2 now has id=%s" % final_threadid

    sys.exit()
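The timer-check optimization sketched in the docstring above (skip the ping when the connection was checked in during the same integer second) can be expressed in a few lines. This is a hypothetical, self-contained sketch — `TimestampedPing` and the bare `Record` class are illustrations, not SQLAlchemy's actual `_ConnectionRecord` API; the optional `now` parameter exists only to make the logic deterministic:

```python
import time

class TimestampedPing:
    """Sketch of the check-in/check-out timer optimization:
    ping only when a full second has elapsed since check-in."""

    def checkin(self, con_record, now=None):
        # record the integer part of the current time at check-in
        ts = now if now is not None else time.time()
        con_record.properties['checkin_ts'] = int(ts)

    def should_ping(self, con_record, now=None):
        # same second as the last check-in -> connection is probably
        # fine, skip the ping; otherwise ping before handing it out
        ts = now if now is not None else time.time()
        return con_record.properties.get('checkin_ts') != int(ts)

class Record:
    # hypothetical stand-in for a pool connection record
    def __init__(self):
        self.properties = {}

rec = Record()
listener = TimestampedPing()
listener.checkin(rec, now=100.0)
assert not listener.should_ping(rec, now=100.5)  # same second: skip ping
assert listener.should_ping(rec, now=101.2)      # second elapsed: ping
```

Under high pool churn this trades a tiny window of staleness (up to one second) for avoiding a round-trip ping on nearly every checkout.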

[sqlalchemy] Re: moving an object

2009-04-05 Thread jason kirtland

jean-philippe dutreve wrote:
 Hi all,
 
 I wonder if SA can handle this use case:
 
 An Account can contain Entries ordered by 'position' attribute.
 
 mapper(Account, table_accounts, properties = dict(
 entries = relation(Entry, lazy=True, collection_class=ordering_list
 ('position'),
 order_by=[table_entries.c.position],
 passive_deletes='all', cascade='save-update',
 backref=backref('account', lazy=False),
 ),
 ))
 
 I'd like to move an entry from accountA to accountB and let SA remove
 the link between the entry and accountA:
 
 entry = accountA.entries[0]
 insort_right(accountB.entries, entry)
 assert not entry in accountA.entries# false, entry is still in
 accountA 
 
 It is possible?

Try removing the entry from accountA:

 entry = accountA.entries.pop(0)
 ...

Also beware that bisect insort has a bug that prevents it from working 
properly with list subclasses like ordering_list (or any SA list-based 
collection).  I think it's fixed in Python 3.0, not sure if the fix was 
backported to 2.x.




[sqlalchemy] Re: Suggestions on using a dialect outside of the Alchemy installation tree

2009-03-10 Thread jason kirtland

It'd look like this:

http://code.google.com/p/ibm-db/source/browse/trunk/IBM_DB/ibm_db_sa/setup.py

Your dialect will be available to SA after you 'python setup.py install' 
or 'python setup.py develop' in your -ase distribution.

phrrn...@googlemail.com wrote:
 Thanks Mike. This sounds great although I have to admit that I don't
 follow it completely as I have not used authored anything via
 setuptools. If this is trivial for you, could you sketch out what this
 would look like?
 
 pjjH
 
 
 On Mar 10, 11:25 am, Michael Bayer mike...@zzzcomputing.com wrote:
 you can install the dialect using a setuptools entry point.  SQLAlchemy
 looks for dialect modules using the sqlalchemy.databases entry point
 name, so in this case you might name it sqlalchemy.databases.sybase-ase.

 phrrn...@googlemail.com wrote:

 Hi,
 I have done a fair bit on a Sybase ASE dialect for Alchemy and it is
 now in a primitive but usable condition for simple applications. My
 employers are fine with contributing the code back to the project and
 I intended to coordinate with Mike Bayer about this shortly. In the
 meantime, we would like to deploy the driver locally and work out some
 of the bugs. Ideally, we would like to do this separately from our
 centralized SQL Alchemy installation as the release cycles for
 production s/w are much longer than the anticipated cycles for the
 Sybase dialect.Is it possible to use a dialect located  outside the
 main installation by something as simple as the connection URI?
 Have any of you similar situations? Have you any suggestions on ways
 to address this issue?
 pjjH
 




[sqlalchemy] Re: Problems/Bug in ordering_list (UNIQUE KEY violation)

2009-02-26 Thread jason kirtland

Michael Bayer wrote:
 
 
 On Feb 19, 2009, at 4:33 PM, oberger wrote:
 
 Thank you Michael,

 but I am not able to bring this to work. Even with a flush and a
 commit after every Statement.
 I understand the problem with dependend UPDATES/DELETES.

 But how is the ordering_list suposed to work?
 When I delete on entry with: del short_trip.trip_stops[1]

 and then flush() and commit(). The ordering_list has to do some
 work in the corresponding database table.
 
 I'm not sure, perhaps Jason can chime in on this

For this constraint configuration you might try making the DB constraint 
initially deferred.  Given the ordering of statement execution in the 
unit of work, no other ideas are coming to mind.  The ordering_list 
itself is totally ignorant of the ORM.  It doesn't issue any flushes or 
deletions, though one could make an implementation that did embed that 
level of control over the unit of work.

-j





[sqlalchemy] Re: how to handle Date values of the form YYYY-MM-00

2009-02-01 Thread jason kirtland

rdmur...@bitdance.com wrote:
 I have an existing MySQL database (that I do not control) with schema
 fields defined using the 'Date' type.  The values that occur in these
 fields often have a 'day' of '00', and sometimes a month of '00', and
 sometimes the field's value is 0000-00-00.  The zeros are used to indicate
 "don't know" (or, sometimes, "don't care").
 
 Since '00' is invalid for the fields in a Python DateTime, it seems as though
 I can't actually use DateTime to manage these values.  My application
 should be able to use them as strings, but how do I arrange to do that?
 The conversion to DateTime is presumably taking place at the DBAPI level.

Check out the MySQLdb docs for the 'conv' type mapping option to 
connect(). I think you should be able to override the default datetime 
converter with your own that falls back to a string or whatever you'd 
like it to do.






[sqlalchemy] Re: how to handle Date values of the form YYYY-MM-00

2009-02-01 Thread jason kirtland

rdmur...@bitdance.com wrote:
 Quoth jason kirtland j...@discorporate.us:
 rdmur...@bitdance.com wrote:
 I have an existing MySQL database (that I do not control) with schema
 fields defined using the 'Date' type.  The values that occur in these
 fields often have a 'day' of '00', and sometimes a month of '00', and
  sometimes the field's value is 0000-00-00.  The zeros are used to indicate
 don't know (or, sometimes, don't care).

 Since '00' is invalid for the fields in a Python DateTime, it seems as 
 though
 I can't actually use DateTime to manage these values.  My application
 should be able to use them as strings, but how do I arrange to do that?
 The conversion to DateTime is presumably taking place at the DBAPI level.
  Check out the MySQLdb docs for the 'conv' type mapping option to 
  connect(). I think you should be able to override the default datetime 
  converter with your own that falls back to a string or whatever you'd 
  like it to do.
 
 That sounds promising, and I doubt I would have found that just by googling,
 so thanks!
 
 Now, how do I get SQLAlchemy to pass that dictionary into the MySQLdb
 'connect'?  :)

You can pass it in via the create_engine's connect_args:

http://www.sqlalchemy.org/docs/05/dbengine.html#custom-dbapi-connect-arguments

Cheers,
Jason




[sqlalchemy] Re: find only loaded objects in relation collections

2009-01-30 Thread jason kirtland

GHZ wrote:
 Hi,
 
 I have a Subscriber and an Address table.  Subscriber can have many
 Addresses
 
 
 mapper(Subscriber, subscriber_table, properties={
 'addresses' : relation(Address, collection_class=Addresses,
 backref='customer')})
 
 From the a Subscriber object, I want to inspect all loaded objects in
 any collections, but do it quietly - without causing any more to load.
 
 
class MyBase(object):

    @reconstructor
    def __my_init__(self):
        self.rules = []

    def get_all_rules_on_all_loaded_related_objects(self):
        for collection in (p for p in object_mapper(self).iterate_properties
                           if type(p) is RelationProperty):
            # How to access this collection without causing it to load?
            # I want to look at the 'rules' property on all loaded objects

The collection will be present in the instance's __dict__ if it has been 
loaded.  So something like:

    if 'addresses' in self.__dict__:
        # loaded, can access self.addresses without triggering db access


-j




[sqlalchemy] Re: INSERT ... ON DUPLICATE KEY UPDATE

2009-01-23 Thread jason kirtland

camlost wrote:
 Thank you for the reply.
 
 However, this solution (though I'm ready to use it) would create a lot
 of SQL queries comparing it with simple INSERT ... ON DUPLICATE KEY
 UPDATE.
 On the other hand, I admit the INSERT ... ON DUPLICATE KEY UPDATE
 might not be available in other DBs. I would like the application
 to be independent of the database engine below.
 
 So... is there some way how to achieve this while keeping number of
 SQL queries low? :-)
 (The number of objects handled this way is about 20 000.)

Sure, if your process will be the only one inserting and changing these 
rows.  Working through your 20k python objects in batches of 1000 or 
whatever size you like, collect the key values from the python objects. 
  Run a database select to see which of those keys are present in the 
database, and then divide your batch into two parts: data needing insert 
and data needing update.

If you've got write contention for this data you'd need to work more 
granularly (likely row by row) instead, keeping in mind the database 
engine's transaction model and ideally taking advantage of any tools the 
db engine provides (like ON DUPLICATE or sql's MERGE) .  Performance and 
engine agnosticism may be mutually exclusive here.
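The batching approach described above — collect the keys from a batch of objects, ask the database which already exist, and split the batch into inserts and updates — can be sketched in plain Python. The `partition_batch` helper is hypothetical, and the SELECT that would produce `existing_keys` is simulated here with a literal set:

```python
def partition_batch(objects, key, existing_keys):
    """Split a batch into rows to INSERT vs rows to UPDATE,
    given the set of keys already present in the database."""
    to_insert, to_update = [], []
    for obj in objects:
        (to_update if key(obj) in existing_keys else to_insert).append(obj)
    return to_insert, to_update

# pretend a single SELECT told us that keys 2 and 3 already exist
batch = [{'id': 1}, {'id': 2}, {'id': 3}, {'id': 4}]
ins, upd = partition_batch(batch, key=lambda o: o['id'],
                           existing_keys={2, 3})
assert [o['id'] for o in ins] == [1, 4]   # rows needing INSERT
assert [o['id'] for o in upd] == [2, 3]   # rows needing UPDATE
```

With 20,000 objects in batches of 1,000, this costs one SELECT plus one bulk INSERT and one bulk UPDATE per batch, instead of 20,000 per-row round trips — but as noted above, it is only safe if this process is the sole writer of those rows.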




[sqlalchemy] Re: passing tuple argument into sqlalchemy.sql.text string

2009-01-21 Thread jason kirtland

Faheem Mitha wrote:
 On Wed, 21 Jan 2009 08:26:13 -0800, jason kirtland
 j...@discorporate.us wrote:
 
 Faheem Mitha wrote:
 Hi,

 I've got a query as follows:

 from sqlalchemy.sql import text

 gq = text("""
 SELECT decode_genotype(cell.snpval_id, snp.allelea_id,
 snp.alleleb_id) FROM cell JOIN snp ON snp.fid =
 cell.snp_id WHERE cell.patient_chipid IN ('DUKE1_plateA_A10.CEL',
 'DUKE1_plateA_A11.CEL')
 """)
 I want to pass in the tuple as an argument, and was wondering how to do 
 it.

 So, I'm looking for something conceptually like

 gq = text("""
 SELECT decode_genotype(cell.snpval_id, snp.allelea_id,
 snp.alleleb_id) FROM cell JOIN snp ON snp.fid =
 cell.snp_id WHERE cell.patient_chipid IN :plist
 """)
  
 gq = conn.execute(gq, plist=('DUKE1_plateA_A10.CEL',
'DUKE1_plateA_A11.CEL'))
  
 Note, I want to pass in a tuple of arbitary length, so changing
 this to pass two string arguments would not do. Perhaps I'm
 supposed to pass in some bindparams too, but I don't know what type
 I should be using.
 
 IN takes a list of scalars, each of which requires its own :bind
 parameter.  On Postgresql you might find it more convenient to use
 ANY, which takes a single array argument:  WHERE
 cell.patient_chipid = ANY (:plist)
 
 Thanks for the suggestion. Can such an array argument be passed in
 from Python?

Give it a try and let us know how it goes.

Cheers,
Jason
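For the record, later SQLAlchemy versions (1.2+) grew an "expanding" bind
parameter that renders the whole IN list from a single named parameter; a
minimal sketch against SQLite, with a stand-in for the `cell` table above:

```python
from sqlalchemy import bindparam, create_engine, text

engine = create_engine("sqlite://")
with engine.connect() as conn:
    conn.execute(text("CREATE TABLE cell (patient_chipid TEXT)"))
    conn.execute(text("INSERT INTO cell VALUES ('A'), ('B'), ('C')"))

    # :plist expands to one bound scalar per list element at execute time,
    # so the list can be of arbitrary length.
    stmt = text(
        "SELECT patient_chipid FROM cell WHERE patient_chipid IN :plist"
    ).bindparams(bindparam("plist", expanding=True))

    rows = conn.execute(stmt, {"plist": ["A", "B"]}).fetchall()
    result = sorted(r[0] for r in rows)
    print(result)
```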





[sqlalchemy] Re: default=0.0 on Float Column produces `col` float default NULL

2008-11-10 Thread jason kirtland

Simon wrote:
 Hi all,
 
 I'm using SA 0.5.0rc3 and MySQL 5.0.51a on Mac OS X 10.4.11. I have a
 table with a float column and would like to have a default value of 0:
 
 Column('col', Float(), default=0.0)
 
 However, executing metadata.create_all(engine) yields
 
 CREATE TABLE `Table` (
   ...
   `col` float default NULL,
   ...
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1
 
 Is that a bug, or am I erring somewhere?

default= is purely a client-side default executed in Python.  For a 
server-side (DDL) default, you want Column(..., server_default='0.0')
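The distinction shows up when you compile the DDL; a small sketch using
SQLite just to inspect the emitted CREATE TABLE:

```python
from sqlalchemy import Column, Float, MetaData, Table, create_engine
from sqlalchemy.schema import CreateTable

metadata = MetaData()
t = Table(
    "t", metadata,
    # Client-side default: applied by SQLAlchemy at INSERT time,
    # never appears in the DDL.
    Column("a", Float, default=0.0),
    # Server-side default: emitted as DEFAULT '0.0' in CREATE TABLE.
    Column("b", Float, server_default="0.0"),
)

engine = create_engine("sqlite://")
ddl = str(CreateTable(t).compile(engine))
print(ddl)
```

Only column `b` carries a DEFAULT clause in the generated DDL.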






[sqlalchemy] Re: default=0.0 on Float Column produces `col` float default NULL

2008-11-10 Thread jason kirtland

With 0.4 it's a positional argument to Column: Column('col', Float(), 
PassiveDefault('0.0'))

Simon wrote:
 Thanks Jason! Is there any way of doing this in SA 0.4 as well?
 
 On 10 Nov., 16:42, jason kirtland [EMAIL PROTECTED] wrote:
 Simon wrote:
 Hi all,
 I'm using SA 0.5.0rc3 and MySQL 5.0.51a on Mac OS X 10.4.11. I have a
 table with a float column and would like to have a default value of 0:
 Column('col', Float(), default=0.0)
 However, executing metadata.create_all(engine) yields
 CREATE TABLE `Table` (
   ...
   `col` float default NULL,
   ...
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1
 Is that a bug, or am I erring somewhere?
 default= is purely a client-side default executed in Python.  For a
 server-side (DDL) default, you want Column(..., server_default='0.0')
  





[sqlalchemy] Re: foreign key problem when using reflection and schemas

2008-10-27 Thread jason kirtland

That should be working now in r5203.  The reflection code was missing an 
edge case where an explicit schema= is the same as the connection's 
schema.  Switching those to schema=None should work as intended if you 
need a workaround on a released version.

Cheers,
Jason


Martijn Faassen wrote:
 Hi there,
 
 I have a problem with foreign keys that seems to occur when I combine 
 reflection and explicit schemas, in the context of MySQL. I've confirmed 
 this problem with both rc2 and the trunk. It's best demonstrated with 
 some failing code:
 
 Imagine the following MySQL database 'somedb':
 
 CREATE TABLE somedb.a (
id int PRIMARY KEY auto_increment NOT NULL
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
 
 CREATE TABLE somedb.b (
id int PRIMARY KEY auto_increment NOT NULL,
a_id int NOT NULL,
FOREIGN KEY (a_id) REFERENCES somedb.a(id)
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
 
 And the following code:
 
 from sqlalchemy import *
 from sqlalchemy.orm import mapper, relation, sessionmaker
 
 engine = create_engine('mysql:///somedb')
 meta = MetaData()
 meta.bind = engine
 
 a_table = Table(
  'a',
  meta,
  schema='somedb',
  autoload=True)
 
 b_table = Table(
  'b',
  meta,
  schema='somedb',
  autoload=True)
 
 class A(object):
  pass
 
 
 class B(object):
  pass
 
 mapper(A, a_table,
 properties={'bs': relation(B)})
 mapper(B, b_table)
 
 Session = sessionmaker(bind=engine)
 session = Session()
 print session.query(A).all()
 
 When executing this code, the last line fails with the following error:
 
 Traceback (most recent call last):
File bin/devpython, line 138, in ?
  execfile(sys.argv[0])
File experiment.py, line 33, in ?
  print session.query(A).all()
File 
 /home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/session.py,
  
 line 914, in query
  return self._query_cls(entities, self, **kwargs)
File 
 /home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/query.py,
  
 line 95, in __init__
  self.__setup_aliasizers(self._entities)
File 
 /home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/query.py,
  
 line 109, in __setup_aliasizers
  mapper, selectable, is_aliased_class = _entity_info(entity)
File 
 /home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/util.py,
  
 line 454, in _entity_info
  mapper = class_mapper(entity, compile)
File 
 /home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/util.py,
  
 line 531, in class_mapper
  mapper = mapper.compile()
File 
 /home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/mapper.py,
  
 line 371, in compile
  mapper.__initialize_properties()
File 
 /home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/mapper.py,
  
 line 393, in __initialize_properties
  prop.init(key, self)
File 
 /home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/interfaces.py,
  
 line 384, in init
  self.do_init()
File 
 /home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/properties.py,
  
 line 531, in do_init
  self._determine_joins()
File 
 /home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/properties.py,
  
 line 604, in _determine_joins
  raise sa_exc.ArgumentError(Could not determine join condition 
 between 
 sqlalchemy.exc.ArgumentError: Could not determine join condition between 
 parent/child tables on relation A.bs.  Specify a 'primaryjoin' 
 expression.  If this is a many-to-many relation, 'secondaryjoin' is 
 needed as well.
 
 This code *only* fails if I designate an explicit 'schema' in the table 
 statements. If I leave these out, things work as expected. Since I'm 
 interested in working with reflected tables that reside in multiple 
 schemas, this is a problem.
 
 Digging around indicates this that _search_for_join, defined in 
 _determine_joins, does not actually find the join clause. Going deeper 
 traces the failure down to the Join class in sqlalchemy.sql.expression, 
 which fails in self._match_primaries in its __init__ method. This in 
 turn brings us to sqlalchemy.sql.util.join_condition, which has 
 fk.get_referent() return None if schemas are explicitly specified, and 
 work fine if not.
 
 fk.get_referent() uses corresponding_column, and this in turn tries to 
 use contains_column() which returns False in the schema case, but true 
 if 'schema' is not explicitly verified.
 
 Why I don't know. The repr of the column passed into contains_column 
 looks the same as the repr of the column in the table, but apparently 
 it's not exactly the same instance. Something somewhere is making the 
 column to be different.
 
 Is this a bug? If so, how would we go around solving it?
 
 Regards,
 
 Martijn
 
 
 
 
  



[sqlalchemy] Re: in_( bindparam(list) ) ?

2008-10-13 Thread jason kirtland

Michael Bayer wrote:
 
 On Oct 13, 2008, at 10:58 AM, [EMAIL PROTECTED] wrote:
 
 On Monday 13 October 2008 17:21:31 Michael Bayer wrote:
 On Oct 13, 2008, at 6:41 AM, [EMAIL PROTECTED] wrote:
 why i cannot give in_() a bindparam?

 q.filter( x.in_( somelistorset )) works
 q.filter( x.in_( bindparam('somename') )) fails
 ...
 File sqlalchemy/sql/expression.py, line 1368, in _in_impl
for o in seq_or_selectable:
 TypeError: '_BindParamClause' object is not iterable

 is this possible or not ?
 or sending a list/tuple/iterable as bindparam-value is not
 supported?
 is this a resend?   I answered a week ago (hm, GG didnt
 deliver ?  )
 didnt...
 should be x.in_([bindparam('somename')])
 but that is a list containing one param.
 i want the whole list to be a parameter.
 
 
 yeah does PG even support that ?   im assuming PG

as ANY(array[])





[sqlalchemy] Re: Aw: [sqlalchemy] Re: 0.4: can not append objects to instrumentedlists

2008-10-13 Thread jason kirtland

[EMAIL PROTECTED] wrote:
 
 I spent some time to migrate to sqlalchemy 0.4 and it's too late to go back to 
 0.3. What can I do to add objects to properties (InstumentedLists) in 
 sqlalchemy 0.4 (with different mappers)?

I suspect that case will work if you add the user to the session under 
the desired entity_name before appending to the collection.





[sqlalchemy] Re: reflection unavailable for mysql temporary tables?

2008-10-13 Thread jason kirtland

Andy Davidoff wrote:
 On Oct 11, 1:29 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Oct 11, 2008, at 12:49 PM, AndyDavidoffwrote:
 This fixes the first part of this problem, but unfortunately the `show
 create table` is performed in the connection, not the session in which
 the temporary table was created.  MySQL doesn't expose temporary
 tables between sessions, so the `show create table` raises a MySQL
 exception due to a non-existent table.
 you can reflect any table on a specific connection using  
 autoload_with=someconnection.  if by Session you mean ORM session,  
 get the current connection using session.connection().
 
 Thanks, but MySQL's temporary tables are invisible to connection
 objects; the reflection would need to occur via queries issued in the
 actual Session (ORM session) in which the tables were created.  I
 doubt this'll be easy to elegantly hack into SQLA, though.

No hacking needed, it works just as Mike described.

from sqlalchemy import *
from sqlalchemy.orm import create_session

session = create_session()
session.bind = create_engine('mysql:///test')

session.begin()

session.execute('CREATE TEMPORARY TABLE foo (x INT)')
session.execute('INSERT INTO foo VALUES (1)')

m = MetaData()
tt = Table('foo', m, autoload=True, autoload_with=session.connection())
print session.execute(tt.select()).fetchall()

session.commit()





[sqlalchemy] Re: SA 0.5 rc1 - Mysql Unicode(1) decode error

2008-10-10 Thread jason kirtland

Martijn Moeling wrote:
 Hi
 
  
 
 I needed a Unicode(1) Column in one of my tables.
 
  
 
 It was translated into a char(1) column in MySQL.
 
  
 
 When querying the table, I get a:  AttributeError: 'Set' object has no 
 attribute 'decode'
 
  
 
 Which disappears if I make the column a Unicode(2), so there might be a 
 small bug in the MySQL code translating Unicode(1) to char(1) ….???

Try upgrading your MySQL-python library.  I've seen it do that on older 
versions.

-j




[sqlalchemy] Re: OrderingList and list.sort()

2008-10-10 Thread jason kirtland

Adam Dziendziel wrote:
 Hi,
 
 It seems that sorting of ordering list doesn't work. Attribute
 object.items is an OrderingList:
 
 object.items.sort(cmp=my_cmp)
 
 The list is sorted, but the ordering column is not updated. I need to
 call explicitly:
 
 object.items._reorder()
 
 Maybe override sort() in OrderingList to invoke self._reorder() after
 sorting?

Sure, makes sense.  reverse() too.  Interested in making a patch for the 
implementation in sqlalchemy/ext/ and adding a new test?

-j
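The behavior discussed here did land: in modern SQLAlchemy, OrderingList
overrides sort() and reverse() to resynchronize the position column, and
reorder() is public for forcing a resync. A small sketch (the model names
are invented):

```python
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.ext.orderinglist import ordering_list
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)
    items = relationship(
        "Item",
        order_by="Item.position",
        collection_class=ordering_list("position"),
    )

class Item(Base):
    __tablename__ = "item"
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey("parent.id"))
    position = Column(Integer)
    name = Column(String)

p = Parent(items=[Item(name="b"), Item(name="a")])
p.items.sort(key=lambda i: i.name)  # sorts the Python list...
p.items.reorder()                   # ...and force-syncs `position` explicitly
positions = [(i.name, i.position) for i in p.items]
print(positions)
```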




[sqlalchemy] Re: mysql utf8 encoding problem

2008-10-09 Thread jason kirtland

Michael Bayer wrote:
 
 On Oct 8, 2008, at 8:54 PM, jason kirtland wrote:
 
 Most likely you just need to configure the db-api's client encoding by
 adding ?charset=utf8 onto your connection URL.

 Enough folks have hit this recently that I'm (again) considering  
 passing
 through the engine encoding= parameter to the MySQLdb connection  
 setup.
  I've resisted the urge for a while because we don't to my knowledge
 re-configure any db-apis in any of the backends.  But this keeps  
 coming
 up despite being documented in the mysql section of the docs, and last
 time I traced through it, it seemed like MySQLdb was ignoring the
 server's configured connection_encoding so a little assist from the SA
 side would probably be useful.

 I'll look at sneaking that into the upcoming rc2 unless the
 implementation is untenable for some reason or there's an outcry.
 
 
 since im a total dumdum, why have i never had this issue in my own  
 dealings with MySQL and Unicode ?   I use the Unicode type, i dont use  
 any charset= on my URL, and things work fine, including all of our  
 unit tests.  Is it actually storing the data incorrectly and we just  
 see the same info at the SQLA round trip level ?

i don't know about your env, but the unit tests under mysql use 
testing.engines.utf8_engine to configure a connection that can send 
unicode across the wire without encoding failures.





[sqlalchemy] Re: mysql utf8 encoding problem

2008-10-09 Thread jason kirtland

Michael Bayer wrote:
 
 On Oct 9, 2008, at 3:39 AM, jason kirtland wrote:
 
 i don't know about your env, but the unit tests under mysql use
 testing.engines.utf8_engine to configure a connection that can send
 unicode across the wire without encoding failures.
 
 
 I can run python test/sql/testtypes.py --db mysql --verbose  
 UnicodeTest.testbasic with a raise or pdb.set_trace() inside of  
 utf8_engine, and engines.utf8_engine is never called.

Might be something about the data being tested in that test method. 
It's needed in the tests where it's used.




[sqlalchemy] Re: mysql utf8 encoding problem

2008-10-08 Thread jason kirtland

joelanman wrote:
 Hi,
 
 Firstly - I'm hugely impressed with SQLAlchemy - it's really helped me
 a lot with my new project.
 
 I'm having problems storing international characters in mysql using
 SQLAlchemy. For example:
 
   école—school
 
 looks like this in mysql:
 
   école—school
 
 I'm using the following engine call:
 
   engine = create_engine(config.db, encoding='utf-8')
 
 and using Unicode as the column type:
 
   Column('content',   UnicodeText),
 
 and utf8 for the table:
 
   mysql_charset='utf8'
 
 I'm pretty sure all my mySQL options are set to utf8. This looks
 really similar to a 'double encoding' issue I found while searching
 the group, but it seems that was fixed in python-mysql 1.2.2, which is
 what I'm using.
 
 Any help would be much appreciated.

Most likely you just need to configure the db-api's client encoding by 
adding ?charset=utf8 onto your connection URL.

Enough folks have hit this recently that I'm (again) considering passing 
through the engine encoding= parameter to the MySQLdb connection setup. 
  I've resisted the urge for a while because we don't to my knowledge 
re-configure any db-apis in any of the backends.  But this keeps coming 
up despite being documented in the mysql section of the docs, and last 
time I traced through it, it seemed like MySQLdb was ignoring the 
server's configured connection_encoding so a little assist from the SA 
side would probably be useful.

I'll look at sneaking that into the upcoming rc2 unless the 
implementation is untenable for some reason or there's an outcry.
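The recommended fix is purely a URL change; a sketch of how the query
string rides along on the connection URL (the credentials and database
name below are placeholders):

```python
from sqlalchemy.engine.url import make_url

# Hypothetical DSN; only the query-string part matters here.  The
# charset option is passed straight through to the MySQLdb driver.
url = make_url("mysql://scott:tiger@localhost/test?charset=utf8")
charset = url.query["charset"]
print(charset)
```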




[sqlalchemy] Re: adding children to ORM object using property causes problems, maybe a bug?

2008-10-02 Thread jason kirtland

Randy Syring wrote:
 After some work with Gedd on #sqlalchemy, it seems that adding
 children to a parent object using a custom property() doesn't work as
 we expected it would.  A test case is here:
 
 http://paste.pocoo.org/show/86848/
 
 The error is triggered by line #53.
 
 Are we doing something wrong or is this a bug in SA?

The only error I see in that test is:

   external_link.url may not be NULL u'INSERT INTO external_link
   (url) VALUES (?)' [None]

which is expected from the test setup.  (No .url is ever assigned.)




[sqlalchemy] Re: reflection unavailable for mysql temporary tables?

2008-09-27 Thread jason kirtland

Andy Davidoff wrote:
 Reflection of temporary tables under MySQL works around revision 4000
 and probably as recently as May but doesn't seem to work under
 revision 5000 or later; I get a NoSuchTableError.  Is this change
 intentional?
 
 If the only change I make to my code is to create and reflect a normal
 table versus a temporary table, then SQLA works as expected.
 Unfortunately, temporary tables offer me an easy concurrency crutch,
 so I deeply regret this missing functionality.
 
 I could dig deeper into when and where this stopped working, but
 perhaps I'm the one missing something.  What is it?  :-)

Give it a try with the trunk, r5129.

-j




[sqlalchemy] Re: Unicode Results from SQL functions

2008-09-25 Thread jason kirtland

Shawn Church wrote:
 
 
 On Wed, Sep 24, 2008 at 10:45 PM, jason kirtland [EMAIL PROTECTED] 
 mailto:[EMAIL PROTECTED] wrote:
  
 
Adding ?charset=utf8&use_unicode=1 to your MySQL connection URL is a
 much easier way to get Unicode back from all DB access.
 
 
Ok,  that works. I thought that create_engine(uri, encoding='latin1', 
convert_unicode=True) would do this.  I am guessing from this that the 
 create_engine arguments are NOT being passed along to the dbapi connector?

No. I believe both of those are specifying the treatment of string data 
going _to_ the DB-API only, not bidirectional behavior.




[sqlalchemy] Re: Fwd: Support for ordered lists of child items

2008-09-24 Thread jason kirtland

Emmett Lazich wrote:
 Thank you Jason.  orderinglist looks like what I am after!
 
 Is your orderinglist plugin fully functional in 0.4.7p1?

Yep.

 Before I attempt it, pls advise if there is any technical reason preventing 
 the integration of orderinglist into the basic_tree.py (adjacency list) 
 example?
 See 
 http://www.sqlalchemy.org/trac/browser/sqlalchemy/trunk/examples/adjacencytree/basic_tree.py

That example uses a dict-based collection, so it's not a drop in.  But I 
don't see any obstacle to using the orderinglist on adjacency lists in 
general.


 jason kirtland wrote:
 Yep, orderinglist handles that case.


 Michael Bayer wrote:
   
 forwarded from pvt email

 orderinglist ?

 Begin forwarded message:

 
 *From: *Emmett [EMAIL PROTECTED] mailto:[EMAIL PROTECTED]
 *Date: *September 22, 2008 9:51:31 AM EDT
 *To: *Michael Bayer [EMAIL PROTECTED] 
 mailto:[EMAIL PROTECTED]
 *Subject: **Re: Support for ordered lists of child items*

 Hello Michael, 18 months later, would your answer to Aaron still be
 the same?

 I have a problem fitting what Aaron described. ie. save+restore of
 child order after re-ordering in the Python side.  Re-ordering child
 list elements would obviously be ideal, but I could cope with updating
 an extra integer node attribute instead.

 I'm completely new to SA and at this stage skimming documentation and
 looking at the tree examples. Found this thread, so wondering if some
 newer SA magic can solve this, or if a custom collection class or
 something else is the best solution.

 I also looked at the ElementTree examples, but they don't appear to
 guarantee child order either - correct?

 Thanks in advance.  I looked at your activity in this group. Amazing!


 On Apr 10 2007, 4:02 am, Michael Bayer [EMAIL PROTECTED]
 wrote:
   
 we dont have the capability to automatically update ordering columns  
 when the elements of a list are moved around.  if you move the  
 elements around, you need to execute some step that will update the  
 index columns (or create a custom collection class that does this for  
 you).

 On Apr 9, 2007, at 12:42 PM, Aaron Digulla wrote:



 
 Hello,
   
 I'm looking for a feature but couldn't find it in the docs.
   
 I have atreelike structure where the user can specify theorderof
 thechildrenof a node. In DB lingo, I have a parentId and an index
 column. When I loadchildren, they should be ordered by the index.
 This seems to be supported.
   
 Can SA also update the index column when I movechildrenin the list
 around? Like:
   
 # ... parent has threechildrenA, B C
 item = parent.children[0]
 del parent.children[0]
 parent.children.insert (1, item)
 # now, parent has threechildrenB, A, C
   
 Regards,
   
 




   
 
  





[sqlalchemy] Re: Unicode Results from SQL functions

2008-09-24 Thread jason kirtland

Shawn Church wrote:
 
 
 On Wed, Sep 24, 2008 at 7:37 PM, Michael Bayer [EMAIL PROTECTED] 
 mailto:[EMAIL PROTECTED] wrote:
 
 we can of course add more functions to the list of known functions
 such as ifnull() (it would be best if ifnull() is a SQL standard
 function, I'm not sure if it is).
 
 
 Not sure this will work for IFNULL since its type depends upon the 
 runtime arguments.  I missed the func type_ argument when I read the 
 documentation.  That is a good solution for the general case of 
 specifiying the type when it cannot be determined from the function or 
 the function arguments.  In fact I'm going to use it any time the type 
 is not obvious.
 
 For what it is worth the following patch modifies ResultProxy to convert 
 strings to unicode if convert_unicode == True.  It 'fixes' my example 
 and test/testall.py still passes.

Adding ?charset=utf8&use_unicode=1 to your MySQL connection URL is a 
much easier way to get Unicode back from all DB access.
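The type_= escape hatch mentioned above can be sketched against SQLite
(which also has an IFNULL function):

```python
from sqlalchemy import Unicode, create_engine, func, select

engine = create_engine("sqlite://")
with engine.connect() as conn:
    # type_= tells SQLAlchemy what result type to apply, since IFNULL's
    # return type depends on its runtime arguments and can't be inferred.
    stmt = select(func.ifnull(None, "fallback", type_=Unicode))
    value = conn.execute(stmt).scalar()
    print(value)
```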




