Re: [sqlalchemy] MX Linux Python3 sqlalchemy hangs on ROLLBACK

2021-05-12 Thread Rob Marshall
Hi Mike,

I'll see if I can separate the specific DataFrame that was causing the 
problem.

Thanks,

Rob

On Tuesday, May 11, 2021 at 7:42:28 PM UTC-4 Mike Bayer wrote:

> this does mean there's a bug in the mariadb driver, if you can provide a 
> reproducing test case
>
>
> On Tue, May 11, 2021, at 12:41 PM, Rob Marshall wrote:
>
> Hi Mike,
>
> Apparently it was the driver. I changed the create_engine to:
>
> engine = create_engine('mysql://user:password@127.0.0.1/options')
>
> And that appears to handle that particular insert without issue.
>
> Thank-you,
>
> Rob
>
> On Tuesday, May 11, 2021 at 10:41:13 AM UTC-4 Mike Bayer wrote:
>
>
> two things to try:
>
> 1. try a different driver, like mysqlclient or pymysql, to see if error 
> persists
>
>
> 2. when it hangs, look in information_schema.processlist for current info 
> (it's the same in mariadb):
>
>
> https://dev.mysql.com/doc/refman/8.0/en/information-schema-processlist-table.html
>
>
>
> On Tue, May 11, 2021, at 10:29 AM, Rob Marshall wrote:
>
> Hi,
>
> I'm updating a MySQL (MariaDB) database using Pandas and sqlalchemy. I am 
> seeing it hang on a ROLLBACK but I'm not sure how to determine what is 
> causing the ROLLBACK. Below is a session with DEBUG enabled. Any help would 
> be appreciated.
>
> The engine is created with:
>
>  engine = create_engine('mariadb+mariadbconnector://user:password@127.0.0.1/options')
>
> The original data collected is returned as a Python dictionary, and I 
> convert each of the rows of calls/puts by expiration date to a Pandas data 
> frame and use to_sql() to update the database.
>
> Thank-you,
>
> Rob
>

Re: [sqlalchemy] MX Linux Python3 sqlalchemy hangs on ROLLBACK

2021-05-11 Thread Rob Marshall
Hi Mike,

Apparently it was the driver. I changed the create_engine to:

engine = create_engine('mysql://user:password@127.0.0.1/options')

And that appears to handle that particular insert without issue.

Thank-you,

Rob
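
The driver is selected by the dialect+driver prefix of the URL, so trying the
alternatives Mike suggests below is a one-line change. A sketch, with
placeholder credentials, assuming each driver is installed:

from sqlalchemy import create_engine

# MariaDB's own connector (the one that exhibited the hang)
engine = create_engine('mariadb+mariadbconnector://user:password@127.0.0.1/options')

# mysqlclient, which is what a bare 'mysql://' URL resolves to when installed
engine = create_engine('mysql+mysqldb://user:password@127.0.0.1/options')

# PyMySQL, a pure-Python driver
engine = create_engine('mysql+pymysql://user:password@127.0.0.1/options')
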
On Tuesday, May 11, 2021 at 10:41:13 AM UTC-4 Mike Bayer wrote:

> two things to try:
>
> 1. try a different driver, like mysqlclient or pymysql, to see if error 
> persists
>
>
> 2. when it hangs, look in information_schema.processlist for current info 
> (it's the same in mariadb):
>
>
> https://dev.mysql.com/doc/refman/8.0/en/information-schema-processlist-table.html
>
>
>
> On Tue, May 11, 2021, at 10:29 AM, Rob Marshall wrote:
>
> Hi,
>
> I'm updating a MySQL (MariaDB) database using Pandas and sqlalchemy. I am 
> seeing it hang on a ROLLBACK but I'm not sure how to determine what is 
> causing the ROLLBACK. Below is a session with DEBUG enabled. Any help would 
> be appreciated.
>
> The engine is created with:
>
>  engine = create_engine('mariadb+mariadbconnector://user:password@127.0.0.1/options')
>
> The original data collected is returned as a Python dictionary, and I 
> convert each of the rows of calls/puts by expiration date to a Pandas data 
> frame and use to_sql() to update the database.
>
> Thank-you,
>
> Rob
>

[sqlalchemy] MX Linux Python3 sqlalchemy hangs on ROLLBACK

2021-05-11 Thread Rob Marshall
Hi,

I'm updating a MySQL (MariaDB) database using Pandas and sqlalchemy. I am 
seeing it hang on a ROLLBACK but I'm not sure how to determine what is 
causing the ROLLBACK. Below is a session with DEBUG enabled. Any help would 
be appreciated.

The engine is created with:

 engine = create_engine('mariadb+mariadbconnector://user:password@127.0.0.1/options')

The original data collected is returned as a Python dictionary, and I 
convert each of the rows of calls/puts by expiration date to a Pandas data 
frame and use to_sql() to update the database.
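
For concreteness, the write path described above amounts to the following (a
minimal sketch; rows_for_expiration is a hypothetical name for one slice of
the collected dictionary):

import pandas as pd

frame = pd.DataFrame(rows_for_expiration)
# to_sql creates the ticker table if it does not exist and inserts the rows;
# this is what produces the CREATE TABLE / INSERT statements in the log below
frame.to_sql('AAPL', engine, if_exists='append', index=True)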

Thank-you,

Rob

---

20210511.10:10:22 stock_option_updates.py:101 [DEBUG] get_db_stocks: Entered
DEBUG:Stock/Option Updates:get_db_stocks: Entered
INFO:sqlalchemy.engine.Engine:SHOW VARIABLES LIKE 'sql_mode'
INFO:sqlalchemy.engine.Engine:[raw sql] ()
DEBUG:sqlalchemy.engine.Engine:Col ('Variable_name', 'Value')
DEBUG:sqlalchemy.engine.Engine:Row ('sql_mode', 
'STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION')
INFO:sqlalchemy.engine.Engine:SHOW VARIABLES LIKE 'lower_case_table_names'
INFO:sqlalchemy.engine.Engine:[generated in 0.00016s] ()
DEBUG:sqlalchemy.engine.Engine:Col ('Variable_name', 'Value')
DEBUG:sqlalchemy.engine.Engine:Row ('lower_case_table_names', '0')
INFO:sqlalchemy.engine.Engine:SELECT DATABASE()
INFO:sqlalchemy.engine.Engine:[raw sql] ()
DEBUG:sqlalchemy.engine.Engine:Col ('DATABASE()',)
DEBUG:sqlalchemy.engine.Engine:Row ('options',)
INFO:sqlalchemy.engine.Engine:SHOW DATABASES
INFO:sqlalchemy.engine.Engine:[raw sql] ()
DEBUG:sqlalchemy.engine.Engine:Col ('Database',)
DEBUG:sqlalchemy.engine.Engine:Row ('information_schema',)
DEBUG:sqlalchemy.engine.Engine:Row ('mysql',)
DEBUG:sqlalchemy.engine.Engine:Row ('options',)
DEBUG:sqlalchemy.engine.Engine:Row ('performance_schema',)
DEBUG:sqlalchemy.engine.Engine:Row ('stocks',)
INFO:sqlalchemy.engine.Engine:DROP DATABASE options
INFO:sqlalchemy.engine.Engine:[raw sql] ()
INFO:sqlalchemy.engine.Engine:COMMIT
INFO:sqlalchemy.engine.Engine:CREATE DATABASE options
INFO:sqlalchemy.engine.Engine:[raw sql] ()
INFO:sqlalchemy.engine.Engine:COMMIT
Expiration dates for AAPL: [1620345600, 1620950400, 1621555200, 1622160000, 
1622764800, 1623369600, 1623974400, 1626393600, 1629417600, 1631836800, 
1634256000, 1642723200, 1655424000, 1663286400, 1674172800, 1679011200, 
1686873600]
For ticker AAPL and expires on 2021-05-07 00:00:00: calls:   60 puts:   59
Starting calls (0) for AAPL number: 60
Converted to dataframe
INFO:sqlalchemy.engine.Engine:SELECT * FROM information_schema.tables WHERE 
table_schema = ? AND table_name = ?
INFO:sqlalchemy.engine.Engine:[generated in 0.00027s] ('options', 'AAPL')
DEBUG:sqlalchemy.engine.Engine:Col ('TABLE_CATALOG', 'TABLE_SCHEMA', 
'TABLE_NAME', 'TABLE_TYPE', 'ENGINE', 'VERSION', 'ROW_FORMAT', 
'TABLE_ROWS', 'AVG_ROW_LENGTH', 'DATA_LENGTH', 'MAX_DATA_LENGTH', 
'INDEX_LENGTH', 'DATA_FREE', 'AUTO_INCREMENT', 'CREATE_TIME', 
'UPDATE_TIME', 'CHECK_TIME', 'TABLE_COLLATION', 'CHECKSUM', 
'CREATE_OPTIONS', 'TABLE_COMMENT', 'MAX_INDEX_LENGTH', 'TEMPORARY')
INFO:sqlalchemy.engine.Engine:BEGIN (implicit)
INFO:sqlalchemy.engine.Engine:
CREATE TABLE `AAPL` (
`index` BIGINT,
`contractSymbol` TEXT,
`lastTradeDate` BIGINT,
strike FLOAT(53),
`lastPrice` FLOAT(53),
bid FLOAT(53),
ask FLOAT(53),
`change` FLOAT(53),
`percentChange` FLOAT(53),
volume BIGINT,
`openInterest` BIGINT,
`impliedVolatility` FLOAT(53),
`inTheMoney` BOOL,
`contractSize` TEXT,
currency TEXT,
`Type` TEXT,
`Date` DATETIME,
`Expiration` DATETIME
)


INFO:sqlalchemy.engine.Engine:[no key 0.00026s] ()
INFO:sqlalchemy.engine.Engine:CREATE INDEX `ix_AAPL_index` ON `AAPL` 
(`index`)
INFO:sqlalchemy.engine.Engine:[no key 0.00018s] ()
INFO:sqlalchemy.engine.Engine:COMMIT
INFO:sqlalchemy.engine.Engine:BEGIN (implicit)
INFO:sqlalchemy.engine.Engine:INSERT INTO `AAPL` (`index`, 
`contractSymbol`, `lastTradeDate`, strike, `lastPrice`, bid, ask, `change`, 
`percentChange`, volume, `openInterest`, `impliedVolatility`, `inTheMoney`, 
`contractSize`, currency, `Type`, `Date`, `Expiration`) VALUES (?, ?, ?, ?, 
?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
INFO:sqlalchemy.engine.Engine:[generated in 0.00113s] ((0, 
'AAPL210507C00065000', 1619713738, 65.0, 62.45, 62.55, 63.85, -5.29, 
-7.822877, 1, 1, 3.917968955078125, 1, 'REGULAR', 'USD', 'calls', 
datetime.datetime(2021, 5, 4, 0, 0), datetime.datetime(2021, 5, 7, 0, 0)), 
(1, 'AAPL210507C00080000', 1619790996, 80.0, 52.55, 47.55, 49.0, 0.0, 0.0, 
1, 29, 2.9414088964843748, 1, 'REGULAR', 'USD', 'calls', 
datetime.datetime(2021, 5, 4, 0, 0), datetime.datetime(2021, 5, 7, 0, 0)), 
(2, 'AAPL210507C00085000', 1619715379, 85.0, 48.13, 41.9, 43.95, 0.0, 0.0, 
1, 4, 1.984375078125, 1, 'REGULAR', 'USD', 'calls', datetime.datetime(2021, 
5, 4, 0, 0), datetime.datetime(2021, 5, 7, 0, 0)), (3, 
'AAPL210507C00090000', 1619812194, 90.0, 37.6, 37.85, 38.1, -3.8500023, 
-9.288304
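
The processlist check suggested in the replies can be run from a second
connection while the first one is hung. A sketch, assuming the same database
and credentials:

from sqlalchemy import create_engine, text

diag_engine = create_engine('mysql+pymysql://user:password@127.0.0.1/options')
with diag_engine.connect() as conn:
    # one row per server thread: its state and the statement it is executing
    for row in conn.execute(text(
            'SELECT id, user, command, time, state, info '
            'FROM information_schema.processlist')):
        print(row)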

[sqlalchemy] How to refer to columns whose names begin with a number when autoloading?

2021-04-11 Thread Rob Rosenfeld
Hi All,

I'm using SQLAlchemy to access a legacy MSSQL database.   I'm using the
autoload feature to load the schema from the database.

In this example I'd like to read data out of the column named "1st_period"
in the database.   The following query shows the SQL I'd need.  But trying
to access a property named "1st_period" yields a SyntaxError

Thanks,
Rob

SELECT TOP 10 [1st_period] FROM Students;

class Student(Model):
    __table__ = Table("Students", metadata, autoload=True, autoload_with=engine)

    @property
    def first_period(self):
        return self.1st_period  # SyntaxError: an identifier cannot start with a digit
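
Names that are not valid Python identifiers cannot be written with a.b
attribute syntax, but getattr() can still reach the mapped attribute, and the
Column object is available through the table's column collection. A sketch
against the model above:

@property
def first_period(self):
    # getattr accepts names that attribute syntax cannot express
    return getattr(self, "1st_period")

# for query expressions, the reflected column is reachable by key:
first_period_col = Student.__table__.c["1st_period"]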



Re: [sqlalchemy] Many-to-many using custom join columns and autoloaded tables

2020-07-22 Thread Rob Rosenfeld
Thanks.  Sorry I missed that.  I latched onto "Joins to a Target Entity or
Selectable"

On Tue, Jul 21, 2020 at 11:15 PM Mike Bayer 
wrote:

> to join on the relationships you have to name them:
>
>
> Class.query.join(Class.enrollments).join(Enrollment.students).join(Student.details)
>
> the different ways to join are laid out at
>
>
> https://docs.sqlalchemy.org/en/13/orm/query.html?highlight=query%20join#sqlalchemy.orm.query.Query.join
>
>
>
> On Tue, Jul 21, 2020, at 9:40 PM, Rob Rosenfeld wrote:
>
>
> Hi All,
>
> I'm struggling to put several concepts together.   I am working with a
> legacy SQL Server database (2008 R2 and 2019) that was not set up with much 
> planning, let alone ORM in mind.   I would like to set up a many to many
> relationship that joins through columns I specify and all the tables are
> autoloaded from the database schema.  I've been doing fine until I hit this
> association object pattern.  Can  you please offer any advice?
>
> Thanks,
> Rob
>
> In the end my goal is to execute the following and have it issue a single
> statement to the database.
>
> results = 
> Class.query.join(Enrollment).join(Student).join(StudentDetails).all()
>
>
>
>
> But even this call fails and gets the error below.  I assume I've
> incorrectly defined the relationships if it doesn't know how to do the
> join.
>
> results = Class.query.join(Enrollment).all()
>
>
> sqlalchemy.exc.InvalidRequestError: Don't know how to join to <class
> '__main__.Enrollment'>. Please use the .select_from() method to establish
> an explicit left side, as well as providing an explicit ON clause if not
> present already to help resolve the ambiguity.
>
>
> Here's a trimmed example illustrating what I'm trying
>
> class Class(Model):
>     __table__ = Table("classes", metadata, autoload=True,
>                       autoload_with=engine)
>
>
> class Student(Model):
>     __table__ = Table("Students", metadata, autoload=True,
>                       autoload_with=engine)
>
>
> class Enrollment(Model):
>     __table__ = Table("Enrollment", metadata, autoload=True,
>                       autoload_with=engine)
>
>     class_join_string = (
>         "and_(remote(Class.ClassID)==foreign(Enrollment.class),"
>         "remote(Class.TimeSlot)==foreign(Enrollment.time_slot))")
>     class_ = relationship("Class", primaryjoin=class_join_string,
>                           backref="enrollments")
>
>     student_join_string = (
>         "remote(Student.StudentId)==foreign(Enrollment.studentID)")
>     student = relationship("Student", primaryjoin=student_join_string,
>                            backref="classes")
>
>
> class StudentDetails(Model):
>     # 1 to 1 with Student
>     __table__ = Table("StudentDetails", metadata, autoload=True,
>                       autoload_with=engine)
>     student_join_string = (
>         "foreign(StudentDetails.student)==remote(Student.Student_id)")
>     student = Db.db().relationship('Student', primaryjoin=student_join_string,
>                                    backref=Db.db().backref('details',
>                                                            uselist=False))
>
>

[sqlalchemy] Many-to-many using custom join columns and autoloaded tables

2020-07-21 Thread Rob Rosenfeld
Hi All,

I'm struggling to put several concepts together.   I am working with a
legacy SQL Server database (2008 R2 and 2019) that was not set up with much
planning, let alone ORM in mind.   I would like to set up a many to many
relationship that joins through columns I specify and all the tables are
autoloaded from the database schema.  I've been doing fine until I hit this
association object pattern.  Can  you please offer any advice?

Thanks,
Rob

In the end my goal is to execute the following and have it issue a single
statement to the database.

results = 
Class.query.join(Enrollment).join(Student).join(StudentDetails).all()


But even this call fails and gets the error below.  I assume I've
incorrectly defined the relationships if it doesn't know how to do the
join.

results = Class.query.join(Enrollment).all()

sqlalchemy.exc.InvalidRequestError: Don't know how to join to <class
'__main__.Enrollment'>. Please use the .select_from() method to establish an
explicit left side, as well as providing an explicit ON clause if not
present already to help resolve the ambiguity.


Here's a trimmed example illustrating what I'm trying

class Class(Model):
    __table__ = Table("classes", metadata, autoload=True,
                      autoload_with=engine)


class Student(Model):
    __table__ = Table("Students", metadata, autoload=True,
                      autoload_with=engine)


class Enrollment(Model):
    __table__ = Table("Enrollment", metadata, autoload=True,
                      autoload_with=engine)

    class_join_string = (
        "and_(remote(Class.ClassID)==foreign(Enrollment.class),"
        "remote(Class.TimeSlot)==foreign(Enrollment.time_slot))")
    class_ = relationship("Class", primaryjoin=class_join_string,
                          backref="enrollments")

    student_join_string = (
        "remote(Student.StudentId)==foreign(Enrollment.studentID)")
    student = relationship("Student", primaryjoin=student_join_string,
                           backref="classes")


class StudentDetails(Model):
    # 1 to 1 with Student
    __table__ = Table("StudentDetails", metadata, autoload=True,
                      autoload_with=engine)
    student_join_string = (
        "foreign(StudentDetails.student)==remote(Student.Student_id)")
    student = Db.db().relationship('Student', primaryjoin=student_join_string,
                                   backref=Db.db().backref('details',
                                                           uselist=False))
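
Per the reply earlier in this archive, the joins need to name the
relationship attributes; with the definitions above, the goal query would be
spelled roughly as (a sketch):

results = (Class.query
           .join(Class.enrollments)
           .join(Enrollment.student)
           .join(Student.details)
           .all())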



[sqlalchemy] alembic (1.4.2) revision --autogenerate hang

2020-06-18 Thread Rob Schottland
I'm trying to do  "alembic revision --autogenerate"

I get the usual warnings (that I've gotten in previous successful runs)

Here's the console output

alembic$ alembic revision --autogenerate

INFO  [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO  [alembic.runtime.migration] Will assume transactional DDL.
/home/rschottland/.local/share/virtualenvs/astorb_db-u7fxujye/lib/python3.7/site-packages/sqlalchemy/dialects/postgresql/base.py:3010:
 
SAWarning: Did not recognize type 'point' of column 'point_ra_dec'
  "Did not recognize type '%s' of column '%s'" % (attype, name)
INFO  [alembic.ddl.postgresql] Detected sequence named 'units_id_seq' as 
owned by integer column 'units(id)', assuming SERIAL and omitting
/home/rschottland/.local/share/virtualenvs/astorb_db-u7fxujye/lib/python3.7/site-packages/sqlalchemy/dialects/postgresql/base.py:3331:
 
SAWarning: Predicate of partial index ix_nea_elements ignored during 
reflection
  % idx_name
INFO  [alembic.ddl.postgresql] Detected sequence named 
'editorial_notices_id_seq' as owned by integer column 
'editorial_notices(id)', assuming SERIAL and omitting
INFO  [alembic.ddl.postgresql] Detected sequence named 
'taxonomy_sys_type_id_seq' as owned by integer column 
'taxonomy_sys_type(id)', assuming SERIAL and omitting

After this, alembic appears to be hung...

Any thoughts on what could be the cause? How might I debug this situation?
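
One way to see what the hung connection is waiting on (a sketch, assuming
PostgreSQL 9.6 or later; the URL is a placeholder): open a second session and
inspect pg_stat_activity. A migration blocked behind another session's lock
shows up with a non-empty wait_event:

from sqlalchemy import create_engine, text

engine = create_engine('postgresql://user:password@localhost/mydb')  # placeholder URL
with engine.connect() as conn:
    rows = conn.execute(text(
        "SELECT pid, state, wait_event_type, wait_event, left(query, 60) AS query "
        "FROM pg_stat_activity WHERE datname = current_database()"))
    for row in rows:
        print(row)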



Re: [sqlalchemy] Retaining plain old python instance member values betwen Flask-SQLAlchemy requests.

2020-02-22 Thread Rob Rosenfeld
For now I have an unoptimized solution working.  I don't ever keep a
reference to the SQLA instance, and in the reconstructor I reattach the
ephemeral state.  Something like:

class User(db.Model):
  __data_cache = dict()  # class-level cache of ephemeral state, keyed by primary key

  @orm.reconstructor
  def init_on_load(self):
    # runs every time SQLAlchemy builds a User instance from a row, in any session
    data = User.__data_cache.get(self.id)
    if data is None:
      User.__data_cache[self.id] = 0
    User.__data_cache[self.id] += 1

Rob


On Wednesday, February 12, 2020 at 9:43:37 PM UTC-6, Rob Rosenfeld wrote:
>
> Thanks for the thoughts.  I'm going to digest, read docs, and experiment. 
>
>
> On Wed, Feb 12, 2020 at 8:52 AM Mike Bayer  > wrote:
>
>>
>>
>> On Tue, Feb 11, 2020, at 11:41 PM, Rob Rosenfeld wrote:
>>
>> Hi All,
>>
>> I am using Flask-SQLAlchemy on a legacy database.  My SQLA classes / 
>> tables are setup using declarative base and autoload.  In addition to using 
>> the class to access persisted data, I also use plain old Python to enrich 
>> them with ephemeral data. 
>>
>> Every time I load a User object, I'd like the instance for that 
>> particular object / row to retain its plain old python instance data.
>>
>> My understanding is that Flask-SQLAlchemy will by default use a separate 
>> scoped_session for each request.  And that means at the end of the request 
>> session and user will go out of scope.  So the next request to
>> /users/Bob will instantiate a new User object whose state will be
>> reinitialized . . . each time I make this request I will be told it was
>> "asked 1 time".
>>
>> Alternately, if I call /users_alt/Bob, then the second time I call it a
>> DetachedInstanceError will be raised when I attempt to read user.owners.
>>
>> I suspect there's a pattern used to accomplish what I want to do, but I 
>> have not tried the right Google search.  Any recommendations on how to 
>> approach this or searches to try / doc sections to read?   The state I'm 
>> storing in times_name_read_since_launch is actually complex enough that I 
>> don't want to persist it and have to keep that dependent state synchronized.
>>
>> Alternately is there a way for me to reattach a SQLA object to the 
>> current requests' session?  Then, ideally, I could get updates from the 
>> database to User or Owner  objects, but retain my ephemeral state in the 
>> plain old python.  
>>
>>
>> you can get a plain reattach without it emitting any SQL using 
>> session.add():
>>
>>
>> session.add(detached_object)
>>
>>
>> however, if you're not sure if detached_object's primary key might have 
>> been loaded already, you would want to use session.merge() instead.  if 
>> your object has no changes on it you can add load=False which will avoid a 
>> SQL round trip.

Re: [sqlalchemy] Retaining plain old python instance member values betwen Flask-SQLAlchemy requests.

2020-02-12 Thread Rob Rosenfeld
Thanks for the thoughts.  I'm going to digest, read docs, and experiment.


On Wed, Feb 12, 2020 at 8:52 AM Mike Bayer  wrote:

>
>
> On Tue, Feb 11, 2020, at 11:41 PM, Rob Rosenfeld wrote:
>
> Hi All,
>
> I am using Flask-SQLAlchemy on a legacy database.  My SQLA classes /
> tables are setup using declarative base and autoload.  In addition to using
> the class to access persisted data, I also use plain old Python to enrich
> them with ephemeral data.
>
> Every time I load a User object, I'd like the instance for that particular
> object / row to retain its plain old python instance data.
>
> My understanding is that Flask-SQLAlchemy will by default use a separate
> scoped_session for each request.  And that means at the end of the request
> session and user will go out of scope.  So the next request to
> /users/Bob will instantiate a new User object whose state will be
> reinitialized . . . each time I make this request I will be told it was
> "asked 1 time".
>
> Alternately, if I call /users_alt/Bob, then the second time I call it a
> DetachedInstanceError will be raised when I attempt to read user.owners.
>
> I suspect there's a pattern used to accomplish what I want to do, but I
> have not tried the right Google search.  Any recommendations on how to
> approach this or searches to try / doc sections to read?   The state I'm
> storing in times_name_read_since_launch is actually complex enough that I
> don't want to persist it and have to keep that dependent state synchronized.
>
> Alternately is there a way for me to reattach a SQLA object to the current
> requests' session?  Then, ideally, I could get updates from the database to
> User or Owner  objects, but retain my ephemeral state in the plain old
> python.
>
>
> you can get a plain reattach without it emitting any SQL using
> session.add():
>
>
> session.add(detached_object)
>
>
> however, if you're not sure if detached_object's primary key might have
> been loaded already, you would want to use session.merge() instead.  if
> your object has no changes on it you can add load=False which will avoid a
> SQL round trip.

[sqlalchemy] Retaining plain old python instance member values betwen Flask-SQLAlchemy requests.

2020-02-11 Thread Rob Rosenfeld
Hi All,

I am using Flask-SQLAlchemy on a legacy database.  My SQLA classes / tables 
are set up using declarative base and autoload.  In addition to using the 
class to access persisted data, I also use plain old Python to enrich them 
with ephemeral data. 

Every time I load a User object, I'd like the instance for that particular 
object / row to retain its plain old python instance data.

My understanding is that Flask-SQLAlchemy will by default use a separate 
scoped_session for each request.  And that means at the end of the request 
session and user will go out of scope.  So the next request to /users/Bob
will instantiate a new User object whose state will be reinitialized . . .
each time I make this request I will be told it was "asked 1 time".

Alternately, if I call /users_alt/Bob, then the second time I call it a
DetachedInstanceError will be raised when I attempt to read user.owners.

I suspect there's a pattern used to accomplish what I want to do, but I 
have not tried the right Google search.  Any recommendations on how to 
approach this or searches to try / doc sections to read?   The state I'm 
storing in times_name_read_since_launch is actually complex enough that I 
don't want to persist it and have to keep that dependent state synchronized.

Alternately is there a way for me to reattach a SQLA object to the current 
requests' session?  Then, ideally, I could get updates from the database to 
User or Owner  objects, but retain my ephemeral state in the plain old 
python.  

Thanks for the help,
Rob

from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import MetaData
from sqlalchemy import orm

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = connection_string
db = SQLAlchemy(app)


class Owner(db.Model):
    __table__ = db.Table('Owners', MetaData(), autoload=True,
                         autoload_with=db.engine)


class User(db.Model):
    __table__ = db.Table('users', MetaData(), autoload=True,
                         autoload_with=db.engine)
    owners = db.relationship('Owner',
                             primaryjoin="foreign(User.userId)==remote(Owner.userId)",
                             uselist=True,
                             backref=db.backref('user', uselist=False))

    users_seen = dict()

    @orm.reconstructor
    def init_on_load(self):
        self.times_name_read_since_launch = 0

    @classmethod
    def lookup_by_name(cls, name):
        user_ = User.users_seen.get(name)
        if user_ is None:
            user_ = User.query.filter(User.UserName == name).one()
        return user_

    @classmethod
    def store_by_name(cls, user_):
        if user_.UserName not in User.users_seen:
            User.users_seen[user_.UserName] = user_

    @property
    def name(self):
        self.times_name_read_since_launch += 1
        return self.UserName


@app.route("/users/<name>")
def user(name):
    user_ = User.query.filter(User.UserName == name).one()
    return "{} with {} - asked {} times".format(
        user_.name, user_.owners, user_.times_name_read_since_launch)


@app.route("/users_alt/<name>")
def user_alt(name):
    user_ = User.lookup_by_name(name)
    User.store_by_name(user_)
    owners = None
    if user_.times_name_read_since_launch > 0:
        # don't lazy load addresses the first request; simulates more
        # complex actual behavior desired
        owners = user_.owners
    return "{} with {} - asked {} times".format(
        user_.name, owners, user_.times_name_read_since_launch)


db.create_all()
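
The reattachment described in the replies in this thread comes down to the
following (a sketch, with user_ a detached instance kept from an earlier
request):

# reattach without emitting SQL, when this identity is not already in the session
db.session.add(user_)

# if the identity may already be loaded, merge instead; load=False skips the
# SQL round trip as long as the object has no pending changes
user_ = db.session.merge(user_, load=False)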




Re: [sqlalchemy] Filtering declarative by concatenated columns against Microsoft SQL Server

2016-08-15 Thread rob

On Monday, August 15, 2016 at 8:35:17 AM UTC-5, Mike Bayer wrote:
>
>
>
> On 08/15/2016 12:02 AM, r...@rosenfeld.to wrote:
> > Hi All,
> >
> > I'm not sure whether I've done something wrong, hit an unsupported
> > version of SQL Server, or what.
> >
> > I have a Person table in my SQL Server database that I'm reflecting via
> > `autoload`
> >
> > class Person(Base):
> >     __table__ = Table('Person', meta, autoload=True, autoload_with=engine)
> >
> > The table has LastName and FirstName columns, but I want to search for
> > substrings in the concatenation of FirstName and LastName.  The
> > following query works direct against the SQLServer
> >
> > SELECT * FROM dbo.Person WHERE (FirstName + ' ' + LastName) LIKE '%ob Smi%'
> >
> > Note that trying to use a `CONCAT` function returns the error.
> >
> > SELECT * FROM dbo.Person WHERE concat(FirstName + ' ' + LastName) LIKE '%ob Smi%'
> > [S00010][195] 'concat' is not a recognized built-in function name.
>
>
> string concatenation is available as an operator, the + sign will 
> compile down to the concatenation operator itself (which is usually 
> '||', I thought this was the case for SQL server also).  assuming string 
> concatenation operators still work on SQL server 2012 I'd forego the 
> direct use of "concat" 
>
>   matches = Person.query.filter((Person.FirstName + ' ' +
> Person.LastName).like(name_substring)).all()
>
>
>
Thanks.  That did it.

 




[sqlalchemy] Filtering declarative by concatenated columns against Microsoft SQL Server

2016-08-14 Thread rob
Hi All,

I'm not sure whether I've done something wrong, hit an unsupported version 
of SQL Server, or what.   

I have a Person table in my SQL Server database that I'm reflecting via 
`autoload`

class Person(Base):
    __table__ = Table('Person', meta, autoload=True, autoload_with=engine)


The table has LastName and FirstName columns, but I want to search for 
substrings in the concatenation of FirstName and LastName.  The following 
query works direct against the SQLServer

SELECT * FROM dbo.Person WHERE (FirstName + ' ' + LastName) LIKE '%ob Smi%'


Note that trying to use a `CONCAT` function returns the error.  

SELECT * FROM dbo.Person WHERE concat(FirstName + ' ' + LastName) LIKE '%ob 
Smi%'
*[S00010][195] 'concat' is not a recognized built-in function name.*

It seems that CONCAT was added in SQLServer 2012 and later: 
 https://msdn.microsoft.com/en-us/library/hh231515.aspx 

Trying to replicate this query via SQLAlchemy, the following executes, but 
generates invalid SQL including a call to CONCAT with the error below.

matches = Person.query.filter(db.func.concat(Person.FirstName, ' ', 
Person.LastName).like(name_substring)).all()


sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000', '[42000] 
[FreeTDS][SQL Server]Statement(s) could not be prepared. (8180) 
(SQLExecDirectW)') [SQL: 'SELECT Person_1.[LastName] AS 
[dbo_Person_LastName], Person_1.[FirstName] AS [dbo_Person_FirstName]\nFROM 
dbo.Person AS Person_1 \nWHERE concat(Person_1.[FirstName], ?, 
Person_1.[LastName]) LIKE ?'] [parameters: (' ', 'ob Smi')]


I don't see any mention in the documentation of how to indicate the SQL 
Server version that queries should be compiled for.  

Any suggestions?

Thanks,
Rob
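
(Resolved in the reply earlier in this archive: build the concatenation with
the + operator, which SQLAlchemy compiles to the dialect's own string
concatenation operator, instead of calling func.concat:)

matches = Person.query.filter(
    (Person.FirstName + ' ' + Person.LastName).like(name_substring)
).all()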



[sqlalchemy] relationship joined to max() on a foreign table - help needed

2016-04-26 Thread Rob Fowler


Can anyone help me with this interesting (to me) relationship definition in
sqla.

I have some already defined data, not by me, that has, for each user, a
foreign table of contact numbers, where the highest contact number is the
one to use.

For example, users:

Harry, 1081, and Bob 1082

mobiles:

1081;"0412590410", 1082;"0426236245", 1082;"0416253647"

With:

SELECT u.id, u.first_name, m1.mobile
  FROM model.tbl_users as u
  JOIN model.tbl_mobiles as m1
  ON m1.id = (select max(m2.id) from model.tbl_mobiles as m2
              where m2.user_id = u.id)

gives:

1081;"Harry";"0412590410"
1082;"Bob";"0416253647"

What I want to do is model a relationship on the user (eventually with
uselist=False) where the field is simply the highest number.

I have been hacking away at a try:

mobile = relationship("Mobile",
                      primaryjoin="Mobile.id == select([func.max(Mobile.id)]).where(Mobile.user_id == User.id).as_scalar()",
                      viewonly=True)

but I can't seem to get the join right. I always get:

ArgumentError: Relationship User.mobile could not determine any unambiguous
local/remote column pairs based on join condition and remote_side
arguments. Consider using the remote() annotation to accurately mark those
elements of the join condition that are on the remote side of the
relationship.

I have tried all sorts of somewhat random remote() and foreign() bits
around the query to no avail.

I have a complete example here:

https://gist.github.com/mianos/42cf15928f27cc9dfde9996d2e593e78

Ideas? I am sure it's possible. At the moment I am just using an "order_by
desc" on the relationship and using [0] to get the first.



[sqlalchemy] Re: Temporarily drop and re-create constraint

2015-10-31 Thread rob
On Friday, October 30, 2015 at 6:02:47 PM UTC-5, r...@rosenfeld.to wrote:
>
> I would like to temporarily drop a foreign key constraint while loading 
> data and then revert the constraint's removal when done.   I'm hoping to do 
> this without needing any specific knowledge of the constraint's definition 
> at the time I re-create it.   By that I mean something similar to this 
> fantasy example that's missing a couple key imaginary functions.
>
> inspector = sqlalchemy.engine.reflection.Inspector.from_engine(engine)
> foreign_keys = inspector.get_foreign_keys(table_name)
> for foreign_key in foreign_keys:
>     if foreign_key['name'] == key_name_to_drop:
>         foreign_key_data_to_recreate = foreign_key
>         sqlalchemy.schema.DropConstraint(SOME_FUNCTION(
>             foreign_key_data_to_recreate))
>
> # Load the data
>
> sqlalchemy.schema.AddConstraint(SOME_OTHER_FUNCTION(
>     foreign_key_data_to_recreate))
>
> The above is just one way I imagine it could work.  But maybe instead of 
> `get_foreign_keys` and grabbing the data, I can directly grab the 
> ForeignKey object of interest to delete from the table and add back later.
>
> My goal is to not need to re-code the details of the foreign key at the 
> time I need to reapply it to the table.
>
>
> Thanks much,
> Rob
>


I've been working on this some more and have figured out something that 
works.   I'm guessing it could be generalized to work with any constraint, 
but won't work on that until I need it.  The metadata I'm using is the one 
I pass to declarative_base, but I'm not sure it matters.

from sqlalchemy.schema import DropConstraint
from sqlalchemy.schema import AddConstraint
from sqlalchemy import Table


class WorkWithoutForeignKey(object):
    def __init__(self, engine, table_name, foreign_key_name):
        # reflect the table and locate the named foreign key constraint
        table = Table(table_name, metadata, autoload=True,
                      autoload_with=engine)
        for column in table.columns:
            foreign_keys = column.foreign_keys
            for foreign_key in foreign_keys:
                if foreign_key.constraint.name == foreign_key_name:
                    self.foreign_key_constraint = foreign_key.constraint

        self.connection = engine.connect()

    def __enter__(self):
        # drop the constraint on entry
        self.connection.execute(DropConstraint(self.foreign_key_constraint))

    def __exit__(self, exc_type, exc_val, exc_tb):
        # re-create it from the same reflected constraint object on exit
        self.connection.execute(AddConstraint(self.foreign_key_constraint))


with WorkWithoutForeignKey(engine, 'my_table',
                           'fk_schema_table_column_foreigncolumn'):
    # Load the data
    ...





 



[sqlalchemy] Temporarily drop and re-create constraint

2015-10-30 Thread rob
I would like to temporarily drop a foreign key constraint while loading 
data and then revert the constraint's removal when done.   I'm hoping to do 
this without needing any specific knowledge of the constraint's definition 
at the time I re-create it.   By that I mean something similar to this 
fantasy example that's missing a couple key imaginary functions.

inspector = sqlalchemy.engine.reflection.Inspector.from_engine(engine)
foreign_keys = inspector.get_foreign_keys(table_name)
for foreign_key in foreign_keys:
    if foreign_key['name'] == key_name_to_drop:
        foreign_key_data_to_recreate = foreign_key
        sqlalchemy.schema.DropConstraint(SOME_FUNCTION(
            foreign_key_data_to_recreate))

# Load the data

sqlalchemy.schema.AddConstraint(SOME_OTHER_FUNCTION(
    foreign_key_data_to_recreate))

The above is just one way I imagine it could work.  But maybe instead of 
`get_foreign_keys` and grabbing the data, I can directly grab the 
ForeignKey object of interest to delete from the table and add back later.

My goal is to not need to re-code the details of the foreign key at the 
time I need to reapply it to the table.


Thanks much,
Rob



Re: [sqlalchemy] Referring to __tablename__ as defined by a mixin's declared_attr

2015-08-31 Thread rob
Thanks

On Sunday, August 30, 2015 at 10:28:58 PM UTC-5, Michael Bayer wrote:
>
>
>
> On 8/30/15 1:58 PM, r...@rosenfeld.to  wrote:
>
> Hi All, 
>
> I'm trying to replace string definitions programmatically where possible. 
> I have the following working:
>
> from sqlalchemy import Column
> from sqlalchemy import Integer
> from sqlalchemy import ForeignKey
> from sqlalchemy.orm import relationship
> from sqlalchemy.ext.declarative import declarative_base
>
> import pprint
>
>
> Base = declarative_base()
>
>
> class Parent(Base):
>     __tablename__ = 'parent'
>     id = Column(Integer, primary_key=True)
>     children = relationship("Child", backref=__tablename__)
>
>
> class Child(Base):
>     __tablename__ = 'child'
>     id = Column(Integer, primary_key=True)
>     parent_id = Column(Integer, ForeignKey('parent.id'))
>
>
> p = Parent()
> pprint.pprint('p table name: {}'.format(p.__tablename__))
> c = Child()
> pprint.pprint('c table name: {}'.format(c.__tablename__))
> c.parent = p
> pprint.pprint('Children: {}'.format(p.children))
> pprint.pprint('Parent: {}'.format(c.parent))
>
>
> But I'd like to follow the mixin example from the docs to inherit common 
> columns and a function for __tablename__ via the Common class.
>
> from sqlalchemy import Column
> from sqlalchemy import Integer
> from sqlalchemy import ForeignKey
> from sqlalchemy.orm import relationship
> from sqlalchemy.ext.declarative import declarative_base
> from sqlalchemy.ext.declarative import declared_attr
>
> import pprint
> from inflection import underscore
>
>
> class Common(object):
>     @declared_attr
>     def __tablename__(cls):
>         return underscore(cls.__name__)
>     id = Column(Integer, primary_key=True)
>
>
> Base = declarative_base(cls=Common)
>
>
> class Parent(Base):
>     children = relationship("Child", backref=__tablename__)
>
>
> class Child(Base):
>     parent_id = Column(Integer, ForeignKey('parent.id'))
>
>
> p = Parent()
> pprint.pprint('p table name: {}'.format(p.__tablename__))
> c = Child()
> pprint.pprint('c table name: {}'.format(c.__tablename__))
> c.parent = p
> pprint.pprint('Children: {}'.format(p.children))
> pprint.pprint('Parent: {}'.format(c.parent))
>
>
>
> I cannot figure out how to refer to the __tablename__ property in backref 
> using this model. 
>
> well it needs to be a callable function that is called when the "cls", in 
> this case Parent, is available.  It would be awkward but you'd need to do 
> it like this:
>
> @declared_attr
> def children(cls):
> return relationship("Child", backref=cls.__tablename__)
>
> this is because the way it is now, "children = relationship" needs to know 
> the name "Parent" which is explicitly not available yet, because you're in 
> a class block named "Parent" - that name isn't known to the script until 
> you exit that block. relationship() allows a lot of its members to be 
> passed as functions but backref / back_populates aren't one of them because 
> these have to do with the class mapping, rather than tables/columns which 
> is why the deferred approach is provided.
>
> You can make your own function that generates that @declared_attr in one 
> step to save on verbosity.
>
>
>
>
>
>  If I refer to Base.__tablename__ or Common.__tablename__, it gets the 
> wrong value.   Can you help me fix this?
>
> Thanks,
> Rob



[sqlalchemy] Referring to __tablename__ as defined by a mixin's declared_attr

2015-08-30 Thread rob
Hi All,

I'm trying to replace string definitions programmatically where possible. 
I have the following working:

from sqlalchemy import Column
from sqlalchemy import Integer
from sqlalchemy import ForeignKey
from sqlalchemy.orm import relationship
from sqlalchemy.ext.declarative import declarative_base

import pprint


Base = declarative_base()


class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    children = relationship("Child", backref=__tablename__)


class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('parent.id'))


p = Parent()
pprint.pprint('p table name: {}'.format(p.__tablename__))
c = Child()
pprint.pprint('c table name: {}'.format(c.__tablename__))
c.parent = p
pprint.pprint('Children: {}'.format(p.children))
pprint.pprint('Parent: {}'.format(c.parent))


But I'd like to follow the mixin example from the docs to inherit common 
columns and a function for __tablename__ via the Common class.

from sqlalchemy import Column
from sqlalchemy import Integer
from sqlalchemy import ForeignKey
from sqlalchemy.orm import relationship
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.declarative import declared_attr

import pprint
from inflection import underscore


class Common(object):
    @declared_attr
    def __tablename__(cls):
        return underscore(cls.__name__)
    id = Column(Integer, primary_key=True)


Base = declarative_base(cls=Common)


class Parent(Base):
    children = relationship("Child", backref=__tablename__)


class Child(Base):
    parent_id = Column(Integer, ForeignKey('parent.id'))


p = Parent()
pprint.pprint('p table name: {}'.format(p.__tablename__))
c = Child()
pprint.pprint('c table name: {}'.format(c.__tablename__))
c.parent = p
pprint.pprint('Children: {}'.format(p.children))
pprint.pprint('Parent: {}'.format(c.parent))



I cannot figure out how to refer to the __tablename__ property in backref 
using this model.  If I refer to Base.__tablename__ or 
Common.__tablename__, it gets the wrong value.   Can you help me fix this?

Thanks,
Rob



Re: [sqlalchemy] Column vs Table constraints for Column(unique=True) in Postgres

2015-05-29 Thread rob
Thanks, Lucas.   I've tried that as well.   In all cases, SQLAlchemy always 
emits a table constraint, i.e., an additional CONSTRAINT clause in the 
CREATE TABLE command.  Maybe I've poorly phrased my question and SQLAlchemy 
always emits table constraints?  Here's an updated example.  In all three 
cases a CONSTRAINT clause is used to create a table constraint; in none 
does it extend the column definition with the constraint.

My reasons for wanting this are on the foolish side, so it's not critical. 
 If there's nothing obvious, I appreciate your help and will let this go.  

Instead of:
CREATE TABLE test_3 (
id SERIAL NOT NULL,
alt_id INTEGER,
CONSTRAINT uq_test_3_alt_id UNIQUE (alt_id)
)

I was looking for:
CREATE TABLE test_3 (
id SERIAL NOT NULL,
alt_id INTEGER CONSTRAINT uq_test_3_alt_id UNIQUE
)



from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import MetaData
from sqlalchemy import Column
from sqlalchemy import Integer
from sqlalchemy import UniqueConstraint
from sqlalchemy import create_engine


Base = declarative_base()

class Test1(Base):
    __tablename__ = 'test_1'

    id = Column(Integer, primary_key=True)
    alt_id = Column(Integer, nullable=True, default=None, unique=True)


class Test2(Base):
    __tablename__ = 'test_2'

    id = Column(Integer, primary_key=True)
    alt_id = Column(Integer, nullable=True, default=None)

    __table_args__ = (UniqueConstraint('alt_id', name='uq_alt_id'),)


active_db_url = 'postgres://user:pass@10.10.10.10/db'
engine = create_engine(active_db_url, echo=False)

Base.metadata.drop_all(engine)
Base.metadata.create_all(engine)

convention = {
    'ix': 'ix_%(column_0_label)s',
    'uq': 'uq_%(table_name)s_%(column_0_name)s',
    'ck': 'ck_%(table_name)s_%(constraint_name)s',
    'fk': 'fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s',
    'pk': 'pk_%(table_name)s'
}
metadata = MetaData(naming_convention=convention)
Base = declarative_base(metadata=metadata)

class Test3(Base):
    __tablename__ = 'test_3'

    id = Column(Integer, primary_key=True)
    alt_id = Column(Integer, nullable=True, default=None)

    __table_args__ = (UniqueConstraint('alt_id'),)

Base.metadata.drop_all(engine)
Base.metadata.create_all(engine)


And the output:
CREATE TABLE test_1 (
id SERIAL NOT NULL,
alt_id INTEGER,
PRIMARY KEY (id),
UNIQUE (alt_id)
)

CREATE TABLE test_2 (
id SERIAL NOT NULL,
alt_id INTEGER,
PRIMARY KEY (id),
CONSTRAINT uq_alt_id UNIQUE (alt_id)
)

CREATE TABLE test_3 (
id SERIAL NOT NULL,
alt_id INTEGER,
CONSTRAINT pk_test_3 PRIMARY KEY (id),
CONSTRAINT uq_test_3_alt_id UNIQUE (alt_id)
)



On Wednesday, May 20, 2015 at 9:15:35 PM UTC-5, Lucas Taylor wrote:

 Unless you provide a name, the constraint will be anonymously named, so 
 there is no difference between that and the shortcut.

 Provide a name argument to UniqueConstraint:

 __table_args__ = (UniqueConstraint('alt_id', name='uq_alt_id'),)

 You may also be interested in providing a naming convention to automate 
 this:

 http://docs.sqlalchemy.org/en/latest/core/constraints.html#configuring-constraint-naming-conventions






Re: [sqlalchemy] Column vs Table constraints for Column(unique=True) in Postgres

2015-05-29 Thread rob
Thanks.  Makes sense.

On Friday, May 29, 2015 at 10:12:07 AM UTC-5, Michael Bayer wrote:

  

 On 5/29/15 2:12 AM, r...@rosenfeld.to wrote: 
  
 Thanks, Lucas.   I've tried that as well.   In all cases, SQLAlchemy always 
 emits a table constraint, i.e., an additional CONSTRAINT clause in the 
 CREATE TABLE command.  Maybe I've poorly phrased my question and SQLAlchemy 
 always emits table constraints?  Here's an updated example.  In all three 
 cases a CONSTRAINT clause is used to create a table constraint; in none 
 does it extend the column definition with the constraint.


 there's no difference between a UNIQUE constraint declared at the column 
 or table level from the database's perspective.  SQLA focuses on the table 
 level because that's where there are the most options, which keeps things 
 simplest.   

  

Re: [sqlalchemy] Column vs Table constraints for Column(unique=True) in Postgres

2015-05-17 Thread rob
Sorry it took me a while to test this, but I didn't see any difference in 
the SQL emitted.  What did I miss?

from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column
from sqlalchemy import Integer
from sqlalchemy import UniqueConstraint
from sqlalchemy import create_engine

Base = declarative_base()


class Test1(Base):
    __tablename__ = 'test_1'

    id = Column(Integer, primary_key=True)
    alt_id = Column(Integer, nullable=True, default=None, unique=True)


class Test2(Base):
    __tablename__ = 'test_2'

    id = Column(Integer, primary_key=True)
    alt_id = Column(Integer, nullable=True, default=None)

    __table_args__ = (UniqueConstraint('alt_id'),)

active_db_url = 'postgres://user:pass@10.10.10.10/db'
engine = create_engine(active_db_url, echo=False)

Base.metadata.drop_all(engine)
Base.metadata.create_all(engine)

And here's what I see in the log:
CREATE TABLE test_2 (
id SERIAL NOT NULL,
alt_id INTEGER,
PRIMARY KEY (id),
UNIQUE (alt_id)
)

CREATE TABLE test_1 (
id SERIAL NOT NULL,
alt_id INTEGER,
PRIMARY KEY (id),
UNIQUE (alt_id)
)




On Sunday, May 3, 2015 at 10:23:31 PM UTC-5, Michael Bayer wrote:

  sure, use UniqueConstraint directly.  It's better to use that than the 
 unique=True flag in any case.






[sqlalchemy] Column vs Table constraints for Column(unique=True) in Postgres

2015-05-03 Thread rob
Is there a way to control whether DDL emitted by SQLAlchemy uses a column 
and/or table constraint for uniqueness?

It seems the following 
class Part(Base):
    __tablename__ = 'part'
    third_party_id = Column(Integer, nullable=True, default=None, unique=True)


emits a table constraint
CREATE TABLE part (
third_party_id INTEGER, 
CONSTRAINT uq_part_third_party_id UNIQUE (third_party_id)
)




Is it possible to emit the following with a column constraint instead?
CREATE TABLE part (
third_party_id INTEGER CONSTRAINT uq_part_third_party_id UNIQUE
)

Thanks



Re: [sqlalchemy] How to output SQLAlchemy logger only to a file?

2015-03-19 Thread rob
Thanks much.  That was it.

On Thursday, March 19, 2015 at 6:33:27 PM UTC-5, Michael Bayer wrote:



 r...@rosenfeld.to wrote: 

  From what I read in the SQLAlchemy logging configuration documentation, 
 I understood that the echo argument on sqlalchemy.create_engine controls 
 whether sqlalchemy logging is forced to stdout, but shouldn't affect 
 whether log messages are available to log handlers. 
  
  In the code below, I get no output to stdout OR db.log if echo=False and 
 I get output to both stdout AND db.log if echo=True. I want nothing to 
 stdout while db.log is still populated. How can I accomplish that? 

 Hmm well logging is looking at the effective level of the logger 
 itself, not the handler, so you’d need db_logger.setLevel(db_log_level). I 
 think the “level” that’s on the handler is an additional level of 
 filtering. 

 The effective level of a logger you can see like this: 

 >>> import logging 
 >>> log = logging.getLogger("asdf") 
 >>> log.getEffectiveLevel() 
 30 

 which we can see is WARN: 

 >>> logging.WARN, logging.INFO, logging.DEBUG 
 (30, 20, 10) 


 the “echo=True” flag sets up this level if not set already. 
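
Putting that together with the snippet quoted below, a sketch of the intended setup (file-only logging, with the level set on the logger itself and echo left at its default of False):

import logging
import sqlalchemy

db_handler = logging.FileHandler('db.log')

db_logger = logging.getLogger('sqlalchemy')
db_logger.addHandler(db_handler)
db_logger.setLevel(logging.INFO)   # the fix: set the level on the logger, not just the handler

# echo defaults to False, so nothing is forced to stdout
engine = sqlalchemy.create_engine('postgres://user:pass@localhost/log_test')
engine.connect()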




  
  This is python 2.7.6 and sqlalchemy 0.9.9 
  
  import sqlalchemy
  import logging

  active_db_url = 'postgres://user:pass@localhost/log_test'

  db_log_file_name = 'db.log'
  db_log_level = logging.INFO

  db_handler = logging.FileHandler(db_log_file_name)
  db_handler.setLevel(db_log_level)

  db_logger = logging.getLogger('sqlalchemy')
  db_logger.addHandler(db_handler)

  engine = sqlalchemy.create_engine(active_db_url, echo=True)
  engine.connect()
  Cross posted from 
 http://stackoverflow.com/questions/29114627/how-to-output-sqlalchemy-logger-only-to-a-file
  
  
  Thanks in advance, 
  
  Rob 
  
  



[sqlalchemy] How to output SQLAlchemy logger only to a file?

2015-03-19 Thread rob


From what I read in the SQLAlchemy logging configuration documentation, I 
understood that the echo argument on sqlalchemy.create_engine controls 
whether sqlalchemy logging is forced to stdout, but shouldn't affect 
whether log messages are available to log handlers.

In the code below, I get no output to stdout OR db.log if echo=False and I 
get output to both stdout AND db.log if echo=True. I want nothing to stdout 
while db.log is still populated. How can I accomplish that?

This is python 2.7.6 and sqlalchemy 0.9.9

import sqlalchemy
import logging

active_db_url = 'postgres://user:pass@localhost/log_test'

db_log_file_name = 'db.log'
db_log_level = logging.INFO

db_handler = logging.FileHandler(db_log_file_name)
db_handler.setLevel(db_log_level)

db_logger = logging.getLogger('sqlalchemy')
db_logger.addHandler(db_handler)

engine = sqlalchemy.create_engine(active_db_url, echo=True)
engine.connect()


Cross posted from 
http://stackoverflow.com/questions/29114627/how-to-output-sqlalchemy-logger-only-to-a-file


Thanks in advance,

Rob



Re: [sqlalchemy] dogpile cache async_creation_runner question (actually with sqlalchemy but the example has the sqla factored out)

2015-01-04 Thread Rob Fowler
Thanks for the reply Mike. The explanation is somewhat as expected.
Based on this, to keep things simple (and being bone lazy), I switched to the 
'redis' backend, whose locking lives outside the process and works 100% 
with multiple threads out of the box.
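
A minimal sketch of that switch, assuming dogpile.cache's bundled redis backend (its distributed_lock option keeps the dogpile lock in Redis, so it does not have to be released by the same thread that acquired it):

from dogpile.cache import make_region

region = make_region().configure(
    'dogpile.cache.redis',
    expiration_time=60,
    arguments={
        'host': 'localhost',
        'port': 6379,
        # the lock lives in Redis rather than in-process, so the async
        # creation thread can release it safely
        'distributed_lock': True,
    },
)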



[sqlalchemy] dogpile cache async_creation_runner question (actually with sqlalchemy but the example has the sqla factored out)

2015-01-03 Thread Rob Fowler
I have some costly aggregations (they take a minute) and I am firing off 
the aggregations in a separate thread and want to use the dogpile lock so 
that the client will receive the 'recent' data while the aggregation is 
occurring in the background.
Using flask and sqla (sqla rocks, as usual). 

My problem is, if I run an async_creation_runner, as per the recipe, it's 
running in a separate thread and dogpile complains: 

Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 551, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 504, in run
    self.__target(*self.__args, **self.__kwargs)
  File "./ct.py", line 35, in runner
    mutex.release()
  File "/home/rfo/ve/local/lib/python2.7/site-packages/dogpile/cache/util.py", line 182, in release
    assert keys is not None, "this thread didn't do the acquire"
AssertionError: this thread didn't do the acquire

As far as I can see, this is true, the mutex was created in the calling 
thread, not in the async generation thread so this is correct.
Can anyone tell me how I am meant to structure the generation with the 
mutexes?
(It seems to me I should be using a proper system level mutex but I assumed 
that was the purpose of the underlying dogpile locking)

This is the gist with my 
code: https://gist.github.com/mianos/72ba45aed2824875c1a6
Pretty much copied from the example.

Thanks in advance.
- Rob



[sqlalchemy] How can I set table constraints with DeferredReflection?

2014-04-02 Thread Rob Crowell
I'm using the ORM and one of my tables does not have a primary key defined. 
 I am also using DeferredReflection, and I can't seem to figure out how to 
defer the PrimaryKeyConstraint until Base.prepare() runs.  Any pointers?

Base = declarative_base(cls=DeferredReflection)

class Person(Base):
    __tablename__ = 'people'
    __table_args__ = (PrimaryKeyConstraint(u'name'), {})

# this does not run
if __name__ == '__main__':
    engine = create_engine('mysql://user:password@localhost/organisms')
    Base.prepare(engine)


When this runs, we get an error constructing the Person class (the __main__ 
section is not hit).  This makes sense given that we haven't reflected the 
table yet!

Traceback (most recent call last):
  File "/home/ubuntu/deferred_reflection.py", line 10, in <module>
    class Person(Base):
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/ext/declarative/api.py", line 53, in __init__
    _as_declarative(cls, classname, cls.__dict__)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/ext/declarative/base.py", line 251, in _as_declarative
    **table_kw)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/schema.py", line 350, in __new__
    table._init(name, metadata, *args, **kw)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/schema.py", line 427, in _init
    self._init_items(*args)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/schema.py", line 70, in _init_items
    item._set_parent_with_dispatch(self)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/base.py", line 283, in _set_parent_with_dispatch
    self._set_parent(parent)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/schema.py", line 2646, in _set_parent
    super(PrimaryKeyConstraint, self)._set_parent(table)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/schema.py", line 2289, in _set_parent
    ColumnCollectionMixin._set_parent(self, table)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/schema.py", line 2257, in _set_parent
    col = table.c[col]
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/_collections.py", line 156, in __getitem__
    return self._data[key]
KeyError: u'name'
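
One possible workaround (a sketch, not from a reply in this thread): DeferredReflection reflects with autoload_replace=False, so a column declared explicitly on the class is kept and merged with the reflected ones; the key can then be stated as a column override rather than a PrimaryKeyConstraint:

from sqlalchemy import Column, String, create_engine
from sqlalchemy.ext.declarative import DeferredReflection, declarative_base

Base = declarative_base(cls=DeferredReflection)

class Person(Base):
    __tablename__ = 'people'
    # declared up front; prepare() reflects the remaining columns later
    name = Column(String, primary_key=True)

if __name__ == '__main__':
    engine = create_engine('mysql://user:password@localhost/organisms')
    Base.prepare(engine)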



Re: [sqlalchemy] Trouble filtering binary columns with tuple_ and an in_ clause against Postgres

2014-02-28 Thread Rob Crowell
Looks good to me :)

...
--
Ran 3 tests in 0.811s

OK


Thanks for your really fast help with this.  Michael Bayer for president!!


On Thursday, February 27, 2014 7:58:24 PM UTC-5, Michael Bayer wrote:

 that patch is in for 0.8 and 0.9.

 On Feb 27, 2014, at 7:29 PM, Michael Bayer mik...@zzzcomputing.com 
 wrote:


 On Feb 27, 2014, at 4:38 PM, Rob Crowell rob.c...@moat.com 
 wrote:


 # in_ clause with 1 STRING, 1 BINARY
 filter_cols = tuple_(HashTest.hash_val, HashTest.hash_type)
 filter_vals = ((encoded_hash, 'md5'),)

 q = session.query(HashTest)
 q = q.filter(filter_cols.in_((filter_vals)))
  ^
 HINT:  No operator matches the given name and argument type(s). You might 
 need to add explicit type casts.
  'SELECT hashtest.hash_test.hash_val AS hashtest_hash_test_hash_val, 
 hashtest.hash_test.hash_type AS hashtest_hash_test_hash_type, 
 hashtest.hash_test.region AS hashtest_hash_test_region \nFROM 
 hashtest.hash_test \nWHERE (hashtest.hash_test.hash_val, 
 hashtest.hash_test.hash_type) IN ((%(param_1)s, %(param_2)s))' {'param_1': 
 <psycopg2._psycopg.Binary object at 0x2d9d850>, 'param_2': 
 <psycopg2._psycopg.Binary object at 0x2c18940>}


 It's complaining about an improper encoding for the BINARY column. 


 this has nothing to do with encoding and instead is about typing.   you 
 can see in the output that SQLAlchemy is turning both elements of the IN 
 tuple into a Binary which would appear to be inappropriate here - the 
 sqlalchemy.sql.Tuple object isn’t yet smart enough to handle heterogeneous 
 types.There’s a patch which will resolve this attached to 
 https://bitbucket.org/zzzeek/sqlalchemy/issue/2977/tuple_-needs-to-record-heterogeneous-types.


 In the meantime you can actually lift and use that Tuple class that’s in 
 the patch:

 from sqlalchemy.sql.expression import ClauseList, ColumnElement, \
     _literal_as_binds, BindParameter
 from sqlalchemy import types

 class tuple_(ClauseList, ColumnElement):
     def __init__(self, *clauses, **kw):
         clauses = [_literal_as_binds(c) for c in clauses]
         self.type = types.NULLTYPE
         self._type_tuple = [arg.type for arg in clauses]

         super(tuple_, self).__init__(*clauses, **kw)

     @property
     def _select_iterable(self):
         return (self, )

     def _bind_param(self, operator, obj):
         return tuple_(*[
             BindParameter(None, o, _compared_to_operator=operator,
                           _compared_to_type=type_, unique=True)
             for o, type_ in zip(obj, self._type_tuple)
         ]).self_group()
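
With that drop-in class in place, the two-column filter from the original report (below) should work unchanged; a sketch reusing those names:

filter_cols = tuple_(HashTest.hash_val, HashTest.hash_type)
q = session.query(HashTest).filter(filter_cols.in_([(encoded_hash, 'md5')]))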








[sqlalchemy] Trouble filtering binary columns with tuple_ and an in_ clause against Postgres

2014-02-27 Thread Rob Crowell
When I pass binary data to a multi-column in_ clause, I seem to be getting 
inconsistent results and I need some help!  I did some testing with MySQL, 
Postgres, and Vertica (connecting via 
https://pypi.python.org/pypi/vertica-sqlalchemy/0.1).  It appears MySQL 
works correctly, but with both Postgres and Vertica (which is almost 
compatible with Postgres) I am having trouble.

I authored a simple test model in a schema named 'hashtest'.  The (silly) 
idea is to store binary hash values in the hash_val column and tag the type 
(i.e. md5) in hash_type -- I added a region column just so I would have 2 
non-string columns to play with:

Base = declarative_base()

class HashTest(Base):
    __tablename__ = 'hash_test'
    __table_args__ = {'schema': 'hashtest'}

    hash_val = Column(Binary, primary_key=True)
    hash_type = Column(String, primary_key=True)
    region = Column(String)


Insert a single row:
hashtest=> INSERT INTO hashtest.hash_test (hash_val, hash_type, region) 
VALUES (E'\\xf2666f453b364db65cfdd19756d7e0ad', 'md5', 'US');


For these tests I am using a binary string to store the md5 (rather than 32 
characters 0-F, using 16 raw bytes); the binascii module can do this 
transformation for us:

encoded_hash = '\xf2foE;6M\xb6\\\xfd\xd1\x97V\xd7\xe0\xad'

self.assertEquals(binascii.unhexlify('f2666f453b364db65cfdd19756d7e0ad'), 
encoded_hash)


We can pass this to a single-column IN clause no problem, and get our data 
back out (the unittest module provides assertEqual):

# in_ clause with 1 BINARY
filter_cols = tuple_(HashTest.hash_val)
filter_vals = ((encoded_hash,),)

q = session.query(HashTest)
q = q.filter(filter_cols.in_((filter_vals)))
rows = q.all()
self.assertEqual(len(rows), 1)
self.assertEqual(rows[0].hash_val, encoded_hash)

We can also build a 2-column in_ clause, with both of the string hash_type 
and region columns, and things also work as expected.  However, when I pass 
in a String and Binary value to a single in_ clause on hash_val and 
hash_type, things stop working:

# in_ clause with 1 STRING, 1 BINARY
filter_cols = tuple_(HashTest.hash_val, HashTest.hash_type)
filter_vals = ((encoded_hash, 'md5'),)

q = session.query(HashTest)
q = q.filter(filter_cols.in_((filter_vals)))
rows = q.all()
self.assertEqual(len(rows), 1)
self.assertEqual(rows[0].hash_val, encoded_hash)


MySQL happily responds with the desired results, but Postgres (and Vertica) 
seem to choke up:

======================================================================
ERROR: testPostgres (md5test.BinaryTupleInTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "md5test.py", line 70, in testPostgres
    self._lookupHash(session)
  File "md5test.py", line 49, in _lookupHash
    rows = q.all()
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2286, in all
    return list(self)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2398, in __iter__
    return self._execute_and_instances(context)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2413, in _execute_and_instances
    result = conn.execute(querycontext.statement, self._params)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 717, in execute
    return meth(self, multiparams, params)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/elements.py", line 317, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 814, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 927, in _execute_context
    context)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1076, in _handle_dbapi_exception
    exc_info
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 185, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 920, in _execute_context
    context)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 426, in do_execute
    cursor.execute(statement, parameters)
ProgrammingError: (ProgrammingError) operator does not exist: character 
varying = bytea
LINE 3: ...hash_test.hash_val, hashtest.hash_test.hash_type) IN (('\xf2...
 ^
HINT:  No operator matches the given name and argument type(s). You might 
need to add explicit type casts.
 'SELECT hashtest.hash_test.hash_val AS hashtest_hash_test_hash_val, 
hashtest.hash_test.hash_type AS hashtest_hash_test_hash_type, 
hashtest.hash_test.region AS hashtest_hash_test_region \nFROM 

[sqlalchemy] Help with DeferredReflection and setting up an engine at runtime?

2014-02-17 Thread Rob Crowell
I am having a bit of trouble getting DeferredReflection working the way I 
want; not sure if I am overlooking something obvious or if I just don't 
really understand how it's supposed to work.  

I'm trying to define my models before creating my engine (this does not 
work):

Base = declarative_base(cls=DeferredReflection)

class CityStats(Base):
    __tablename__ = 'city_stats'
    __table_args__ = {'schema': 'prod', 'autoload': True}

if __name__ == '__main__':
    engine = create_engine('...')
    Base.metadata.bind = engine
    Base.prepare(engine)


When I run this I get an error creating the CityStats class: 
sqlalchemy.exc.UnboundExecutionError: 
No engine is bound to this Table's MetaData. Pass an engine to the Table 
via autoload_with=<someengine>, or associate the MetaData with an engine 
via metadata.bind=<someengine>

Of course it does work if I create my engine and set Base.metadata.bind 
BEFORE I define the CityStats model (this works):

engine = create_engine('...')
Base.metadata.bind = engine

class CityStats(Base):
    __tablename__ = 'city_stats'
    __table_args__ = {'schema': 'prod', 'autoload': True}

Base.prepare(engine)


I'm trying to avoid some kind of model_init() function that everyone who 
imports my models.py file will have to remember to call before importing my 
models.  Is this possible?



Re: [sqlalchemy] Help with DeferredReflection and setting up an engine at runtime?

2014-02-17 Thread Rob Crowell
Interesting, thanks Michael.  I didn't realize autoload was implied when 
using DeferredReflection but that makes sense.

Thanks!

On Monday, February 17, 2014 7:17:34 PM UTC-5, Michael Bayer wrote:


 On Feb 17, 2014, at 6:23 PM, Rob Crowell robcc...@gmail.com 
 wrote:

 I am having a bit of trouble getting DeferredReflection working the way I 
 want; not sure if I am overlooking something obvious or if I just don't 
 really understand how it's supposed to work.  

 I'm trying to define my models before creating my engine (this does not 
 work):

 Base = declarative_base(cls=DeferredReflection)

 class CityStats(Base):
     __tablename__ = 'city_stats'
     __table_args__ = {'schema': 'prod', 'autoload': True}

 if __name__ == '__main__':
     engine = create_engine('...')
     Base.metadata.bind = engine
     Base.prepare(engine)


 When I run this I get an error creating the CityStats class: 
 sqlalchemy.exc.UnboundExecutionError: 
 No engine is bound to this Table's MetaData. Pass an engine to the Table 
 via autoload_with=<someengine>, or associate the MetaData with an engine 
 via metadata.bind=<someengine>


 its all about the stack trace, lets look:

 Traceback (most recent call last):
   File "test.py", line 8, in <module>
     class CityStats(Base):
   File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/ext/declarative/api.py", line 53, in __init__
     _as_declarative(cls, classname, cls.__dict__)
   File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/ext/declarative/base.py", line 251, in _as_declarative
     **table_kw)
   File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/sql/schema.py", line 350, in __new__
     table._init(name, metadata, *args, **kw)
   File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/sql/schema.py", line 423, in _init
     self._autoload(metadata, autoload_with, include_columns)
   File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/sql/schema.py", line 439, in _autoload
     msg="No engine is bound to this Table's MetaData. "
   File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/sql/base.py", line 459, in _bind_or_error

 what we see here is that this script doesn’t get to create_engine() at 
 all, it’s trying to hit the database as soon as you say “CityStats(Base)”. 
  Why is that?  Because you have “autoload=True” in your table args, which 
 means, “reflect this table *right now*”.  That is, you are defeating the 
 purpose of using DeferredReflection.

 The solution is just take out that autoload=True.  Any class which 
 descends from the Base here is automatically part of the “classes to 
 reflect” since you have DeferredReflection at the base.
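
In other words, a sketch of the corrected model from the question above:

Base = declarative_base(cls=DeferredReflection)

class CityStats(Base):
    __tablename__ = 'city_stats'
    __table_args__ = {'schema': 'prod'}   # no autoload=True; prepare() reflects later

if __name__ == '__main__':
    engine = create_engine('...')
    Base.prepare(engine)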

 Also, if you’re on 0.9 take a look at the new “automap” extension, I’ve 
 been using it and it’s pretty keen.  There’s one fix for it in 
 not-yet-released 0.9.3 but it’s only needed if you’re dealing with 
 inheritance structures.






[sqlalchemy] aliasing a CTE expression

2013-07-15 Thread Rob
(using sqlalchemy version 0.7.10 and postgres), I'm attempting to join a CTE 
twice in the same select statement.

see here http://www.sqlfiddle.com/#!12/c487e/14 for a SQLFiddle, which 
shows the full select statement which I'm trying to build but essentially 
the problem I'm facing is how to create aliases for the `WITH` such that 
a second join is able to reference the first -- the emboldened 'AS alias' 
below, highlights the problem. 

WITH semester_days AS ( ... blah ... ) 
SELECT ... blah ... 
FROM
  students as S
JOIN
  semester_days *as SD_start*
  On SD_start.day_date = S.start_date
JOIN
  semester_days *as SD_end*
  On SD_end.day_number = SD_start.day_number + (7 * S.n_weeks)
 
Attempting to use:
my_cte = select([ ... ]).cte(name='semester_days')
followed by:
cte_start = my_cte.alias('start_date')
cte_end  = my_cte.alias('end_date') 

... in either a .select_from() appended to the select clause, or in a 
`from_obj` within the clause, simply blows the `WITH` statement off the top 
of the select (if that makes any sense).  

Is there any way to accomplish this?
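
For reference, a sketch of the doubly-joined CTE in Core (assuming a students Table with start_date and n_weeks columns and a CTE exposing day_date/day_number; whether 0.7.10 renders the WITH correctly is exactly what this thread is about):

from sqlalchemy import select

semester_days = select([ ... ]).cte(name='semester_days')   # CTE body elided, as above
sd_start = semester_days.alias('sd_start')
sd_end = semester_days.alias('sd_end')

stmt = select([students]).select_from(
    students
    .join(sd_start, sd_start.c.day_date == students.c.start_date)
    .join(sd_end, sd_end.c.day_number == sd_start.c.day_number + 7 * students.c.n_weeks)
)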

Many thanks,
Rob





[sqlalchemy] Re: aliasing a CTE expression

2013-07-15 Thread Rob
Thank you *so* much...


And apologies for not having included the object set up ... I thought it'd 
obscure more than it showed... 
But for the record, the `generate_series` and `row_number() over()` are 
spot on  :D









[sqlalchemy] Including counts in query results

2013-06-03 Thread Rob Green
Hello,

I'm having some trouble getting my SQL query to work in SQLAlchemy.

In this example, I want to find out the number of meeting rooms and 
bathrooms available to a user.  The user is part of a userGroup, which is 
assigned to one building.
The building has many meeting rooms and bathrooms.  My sql query looks like:

select user.id, 
(select count(*) from meetingRoom where userGroup.buildingCode = 
meetingRoom.buildingCode),
(select count(*) from restroom where userGroup.buildingCode = 
restroom.buildingCode)
from user
join userGroup
on user.userGroupId = userGroup.userGroupId

I've tried using subqueries:
meetingRoomCount = session.query(func.count(MeetingRoom.id)).join(UserGroup, 
MeetingRoom.buildingId == UserGroup.buildingId).subquery()
bathroomCount = session.query(func.count(Bathroom.id)).join(UserGroup, 
Bathroom.buildingId == UserGroup.buildingId).subquery()
session.query(User.id, meetingRoomCount, bathroomCount).first()

But this returns the total number of meeting rooms and bathrooms in the 
database, not the ones that are specific to that user.
I feel like I'm missing something simple here, anyone have any ideas?
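
One possible approach (a sketch, not from a reply in this thread): correlate each count subquery to the outer UserGroup instead of joining inside it, mirroring the raw SQL above:

from sqlalchemy import func

meetingRoomCount = (
    session.query(func.count(MeetingRoom.id))
    .filter(MeetingRoom.buildingId == UserGroup.buildingId)
    .correlate(UserGroup)
    .as_scalar()
)
bathroomCount = (
    session.query(func.count(Bathroom.id))
    .filter(Bathroom.buildingId == UserGroup.buildingId)
    .correlate(UserGroup)
    .as_scalar()
)

rows = (
    session.query(User.id, meetingRoomCount, bathroomCount)
    .join(UserGroup, User.userGroupId == UserGroup.userGroupId)
    .all()
)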

-Rob





Re: [sqlalchemy] Getting the entity name, ideally the ORM class in the shard selector?

2013-04-23 Thread Rob Fowler
 

So anyone else who asks this knows, the following works:

A query with just a column:

qq = session.query(Configuration.name)

Normally I would loop, but in this case, taking just the first column:

yy = qq.column_descriptions[0]['expr']
yy.expression.table.name

gives: 'configurations'

Configuration.__table__.name

gives: 'configurations'

All good, so we can compare on the underlying table names.

Sorry to keep asking questions. If we wanted to do this using the ORM 
objects, is there a way to get the 'Configuration' object to compare to and 
not compare the strings with the underlying tables?
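
A sketch of the class-based comparison (assuming the 'expr' entry is the usual InstrumentedAttribute, which carries the mapped class on its class_ attribute):

expr = qq.column_descriptions[0]['expr']
mapped_class = expr.class_          # the ORM class, e.g. Configuration

unsharded = mapped_class is Configuration   # compare classes, not table-name strings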






Re: [sqlalchemy] Getting the entity name, ideally the ORM class in the shard selector?

2013-04-22 Thread Rob Fowler
If the query does not have a table in it we don't get the tables.
For example, the following simple query gets a list of connection_strings 
column from the Connection table:

aa = session.query(Connection.connection_string)
aa.column_descriptions

[{'aliased': False,
  'expr': <sqlalchemy.orm.attributes.InstrumentedAttribute at 0x10eb7f590>,
  'name': 'connection_string',
  'type': String(length=100)}]

Exploring into the expr I am not sure how to get the table from that.

On Tuesday, 23 April 2013 02:32:15 UTC+10, Michael Bayer wrote:



 use column_descriptions: 
 http://docs.sqlalchemy.org/en/rel_0_8/orm/query.html#sqlalchemy.orm.query.Query.column_descriptions









[sqlalchemy] Getting the entity name, ideally the ORM class in the shard selector?

2013-04-21 Thread Rob Fowler
 

We are using the sharding module included in our application. 


Currently we are selecting shards based on field. This works really well, 
as the field that gets hashed to select the shard needs to be migrated to 
the other tables that exist only in that shard.


Now we would like to have some static entities in all databases; we would 
like to use the table name to indicate that the table is available in all 
shards. Ideally we would define a parent class 'unsharded' and derive from 
that class for all the unsharded entities.


 I can see the current entities are in _entities, but as that begins with 
an underscore I don't think I am meant to be using it. 


 What should I use to get the entities from the query?


ps. Loving sqlalchemy and just started setting up Alembic for our project 
(flask, sqlalchemy, postgres).





Re: [sqlalchemy] Better approach to filtering a relationship collection than using contains_eager()?

2013-02-27 Thread Rob Crowell
Sure!  Here's the query I am attempting to replicate:

SELECT people.id AS person_id, people.name, towns.id AS town_id, towns.name
FROM people
INNER JOIN visited_destinations ON visited_destinations.person_id = 
people.id
INNER JOIN towns ON towns.id = visited_destinations.town_id
WHERE towns.name IN ('Atlanta', 'Memphis')

I realize it's confusing since I labeled 2 people as Sam in my test 
dataset, but I left it like that for consistency.  You can see that one of 
the Sam's has person_id=9 and the other has person_id=10 from the MySQL 
results below:

+---+--+-+-+
| person_id | name | town_id | name|
+---+--+-+-+
| 8 | Bob  |   2 | Atlanta |
| 8 | Bob  |   1 | Memphis |
| 9 | Sam  |   1 | Memphis |
|10 | Sam  |   2 | Atlanta |
|10 | Sam  |   2 | Atlanta |
|10 | Sam  |   2 | Atlanta |
+---+--+-+-+

I'd like to turn this into 3 Person results, like this:
Person(id=8, name='Bob', visited_towns=[Town(name='Atlanta'), 
Town(name='Memphis')])
Person(id=9, name='Sam', visited_towns=[Town(name='Memphis')])
Person(id=10, name='Sam', visited_towns=[Town(name='Atlanta'), 
Town(name='Atlanta'), Town(name='Atlanta')])

On Wednesday, February 27, 2013 12:59:02 PM UTC-5, Michael Bayer wrote:

 I'm not yet digging into your problem, but one remark would be that 
 there's two levels to deal with here. One is figuring out exactly what SQL 
 you want, independent of SQLAlchemy.  It's not clear here if you've gotten 
 that part yet.  The next part is getting parts of that SQL to route into 
 your contains_eager().   We do that second.

 So let me know if you know the actual SQL you want to do first; we'd work 
 from there.   Don't deal with joinedload or contains_eager or any of that 
 yet.


 On Feb 27, 2013, at 2:07 AM, Rob Crowell robcc...@gmail.com 
 wrote:

 Example code: https://gist.github.com/rcrowell/5045832

 I have Person and Town tables, which are joined in a many-to-many fashion 
 through a VisitedDestinations table.  I want to write a query which will 
 return People that have visited either Atlanta or Memphis.  I have a 
 working example using contains_eager below, but I'm not sure if there is a 
 better way...

 I am trying to get a Person object for each person that has visited at 
 least one of these two cities, and I want to get joined Town objects for 
 Atlanta and Memphis.  If a person has visited one of these towns more than 
 once, I'd like to get back one Town object for each visit (so 3 visits to 
 Atlanta produces a visited_towns collection of size three):

class Town(Base):
    __tablename__ = 'towns'
    id = Column('id', Integer, primary_key=True)
    name = Column('name', String(256))

class Person(Base):
    __tablename__ = 'people'
    id = Column('id', Integer, primary_key=True)
    name = Column('name', String(256))

class VisitedDestinations(Base):
    __tablename__ = 'visited_destinations'
    id = Column('id', Integer, primary_key=True)
    person_id = Column('person_id', Integer, ForeignKey(Person.id))
    town_id = Column('town_id', Integer, ForeignKey(Town.id))

    person = relationship(Person, backref='visited_destinations')
    town = relationship(Town, backref='visited_destinations')

# use an association_proxy so client code does not have to deal with
# the visited_destinations table at all
Person.visited_towns = association_proxy('visited_destinations', 'town')

This code more or less does what I would like, but it uses an EXISTS query 
which I don't really want, and it gets back ALL towns that a matching person 
has visited instead of only the matching towns:

# gets all Town objects, including those that do not match our filter
q = session.query(Person)
q = q.filter(Person.visited_towns.any(Town.name.in_(['Atlanta', 'Memphis'])))
# can't do joinedload with association_proxy objects
q = q.options(joinedload_all(Person.visited_destinations, VisitedDestinations.town))
for person in q:
    print person, person.visited_towns

 Which produces:
 Person(name='Bob') [Town(name='Atlanta'), Town(name='Memphis')]
 Person(name='Sam') [Town(name='Memphis')]
 Person(name='Sam') [Town(name='Chattanooga'), Town(name='Atlanta'), 
 Town(name='Atlanta'), Town(name='Atlanta')]


 In my database its likely that a person has visited thousands of 
 destinations, and I really don't need to get all of them back here.  As you

Re: [sqlalchemy] Better approach to filtering a relationship collection than using contains_eager()?

2013-02-27 Thread Rob Crowell
Ah okay, so you do recommend the contains_eager approach.  I guess this is 
exactly the use-case it is designed for?  I always get a little scared when 
I try using advanced features of SQLAlchemy :)

One last question.  The query here seems to take advantage of the fact that 
our table joins on Towns exactly once.  If we had a second table 
WishlistDestinations, that tracked the towns that a Person would like to 
visit instead of ones he had already visited, what would be the syntax for 
filtering those out?

Imagine we also add this model:

class WishlistDestinations(Base):
    __tablename__ = 'wishlist_destinations'
    id = Column('id', Integer, primary_key=True)
    person_id = Column('person_id', Integer, ForeignKey(Person.id))
    town_id = Column('town_id', Integer, ForeignKey(Town.id))

    person = relationship(Person, backref='wishlist_destinations')
    town = relationship(Town, backref='wishlist_destinations')

Person.wishlist_towns = association_proxy('wishlist_destinations', 'town')


This query is obviously going to fail, since there are now 2 relationships 
to the Town model:

q = session.query(Person)
q = q.join(Person.visited_destinations, VisitedDestinations.town, 
WishlistDestinations.town)
q = q.filter(Town.name.in_(['Atlanta', 'Memphis']))
q = q.options(contains_eager(Person.visited_destinations, 
VisitedDestinations.town))

How could I filter by users that have visited Atlanta or Memphis, that also 
want to visit Boston?  The code below fails and I'm not sure how to write 
it correctly, here's my first guess:

q = q.filter(VisitedDestinations.town.name.in_(['Atlanta', 'Memphis']))
q = q.filter(WishlistDestinations.town.name.in_(['Boston']))

AttributeError: Neither 'InstrumentedAttribute' object nor 'Comparator' 
object associated with VisitedDestinations.town has an attribute 'name'


My second guess also fails (I don't think I want to write an EXISTS query 
in the first place):
q = q.filter(Person.visited_towns.any(Town.name.in_(['Atlanta', 
'Memphis'])))
q = q.filter(Person.wishlist_towns.any(Town.name.in_(['Boston'])))

sqlalchemy.exc.OperationalError: (OperationalError) (1066, Not unique 
table/alias: 'towns')...


What's the correct syntax in this case?


On Wednesday, February 27, 2013 2:08:47 PM UTC-5, Michael Bayer wrote:

 oh.  I saw you talking about at least one and exists and thought you 
 had a more complex query.contains_eager() doesn't impact what's 
 queried, only how results are used with the resulting objects, and is 
 usually used with join(), just like this:

 session.query(Person).\
 join(Person.visited_destinations, VisitedDestinations.town).\
 options(contains_eager(Person.visited_destinations, 
 VisitedDestinations.town)).\
 filter(Town.name.in_(['Atlanta', 'Memphis']))



Re: [sqlalchemy] Better approach to filtering a relationship collection than using contains_eager()?

2013-02-27 Thread Rob Crowell
Oh cool!  I was getting incorrect results putting VisitedDestinations.town 
in my contains_eager() call as you suggested, maybe I am doing something 
wrong:

session = Session()
visited_alias = aliased(Town)
wishlist_alias = aliased(Town)
q = session.query(Person)
q = q.join(Person.visited_destinations).join(visited_alias, 
VisitedDestinations.town).filter(visited_alias.name.in_(['Atlanta', 
'Memphis']))
q = q.join(Person.wishlist_destinations).join(wishlist_alias, 
WishlistDestinations.town).filter(wishlist_alias.name.in_(['Boston']))
q = q.options(contains_eager(Person.visited_destinations, 
VisitedDestinations.town),
  contains_eager(Person.wishlist_destinations, 
WishlistDestinations.town))

SAWarning: Multiple rows returned with uselist=False for eagerly-loaded 
attribute 'WishlistDestinations.town' 
SAWarning: Multiple rows returned with uselist=False for eagerly-loaded 
attribute 'VisitedDestinations.town' 

However this seems to take care of the warning and fixes the problem with 
my results:

session = Session()
visited_alias = aliased(Town)
wishlist_alias = aliased(Town)
q = session.query(Person)
q = q.join(Person.visited_destinations).join(visited_alias, 
VisitedDestinations.town).filter(visited_alias.name.in_(['Atlanta', 
'Memphis']))
q = q.join(Person.wishlist_destinations).join(wishlist_alias, 
WishlistDestinations.town).filter(wishlist_alias.name.in_(['Boston']))
q = q.options(contains_eager(Person.visited_destinations), 
contains_eager(Person.wishlist_destinations))

Any idea what is going wrong in the first case?

And THANK YOU for your help Michael.  You are really helpful :)



On Wednesday, February 27, 2013 2:48:21 PM UTC-5, Michael Bayer wrote:


 On Feb 27, 2013, at 2:40 PM, Rob Crowell rob.c...@moat.com 
 wrote:

 Ah okay, so you do recommend the contains_eager approach.  I guess this is 
 exactly the use-case it is designed for?  I always get a little scared when 
 I try using advanced features of SQLAlchemy :)

 One last question.  The query here seems to take advantage of the fact 
 that our table joins on Towns exactly once.  If we had a second table 
 WishlistDestinations, that tracked the towns that a Person would like to 
 visit instead of ones he had already visited, what would be the syntax for 
 filtering those out?

 Imagine we also add this model:

 class WishlistDestinations(Base):
 __tablename__ = 'wishlist_destinations'
 id = Column('id', Integer, primary_key=True)
 person_id = Column('person_id', Integer, ForeignKey(Person.id))
 town_id = Column('town_id', Integer, ForeignKey(Town.id))

 person = relationship(Person, backref='wishlist_destinations')
 town = relationship(Town, backref='wishlist_destinations')

 Person.wishlist_towns = association_proxy('wishlist_destinations', 
 'town')


 This query is obviously going to fail, since there are now 2 relationships 
 to the Town model:

 q = session.query(Person)
 q = q.join(Person.visited_destinations, VisitedDestinations.town, 
 WishlistDestinations.town)
 q = q.filter(Town.name.in_(['Atlanta', 'Memphis']))
 q = q.options(contains_eager(Person.visited_destinations, 
 VisitedDestinations.town))

 How could I filter by users that have visited Atlanta or Memphis, that 
 also want to visit Boston?  The code below fails and I'm not sure how to 
 write it correctly, here's my first guess:

 q = q.filter(VisitedDestinations.town.name.in_(['Atlanta', 'Memphis']))
 q = q.filter(WishlistDestinations.town.name.in_(['Boston']))

 AttributeError: Neither 'InstrumentedAttribute' object nor 'Comparator' 
 object associated with VisitedDestinations.town has an attribute 'name'


 there's no implicit join available when you attempt to say something 
 like SomeClass.relationship1.relationship2, you always have to spell out a 
 join() explicitly, so if you want to join to Wishlist also that's separate. 
  But here you want to hit Town twice, so you also need to alias it:
  
 talias = aliased(Town)

 q = q.join(Person.wishlist_destinations).join(talias, 
 WishlistDest.town).filter(talias.name == 'Boston')

 it's just like SQL!  all the same rules.
  





 My second guess also fails (I don't think I want to write an EXISTS query 
 in the first place):
 q = q.filter(Person.visited_towns.any(Town.name.in_(['Atlanta', 
 'Memphis'])))
 q = q.filter(Person.wishlist_towns.any(Town.name.in_(['Boston'])))

 sqlalchemy.exc.OperationalError: (OperationalError) (1066, Not unique 
 table/alias: 'towns')...


 What's the correct syntax in this case?


 On Wednesday, February 27, 2013 2:08:47 PM UTC-5, Michael Bayer wrote:

 oh.  I saw you talking about "at least one" and "exists" and thought you 
 had a more complex query.  contains_eager() doesn't impact what's 
 queried, only how results are used with the resulting objects, and is 
 usually used with join(), just like

[sqlalchemy] Difference between ZopeTransactionExtension(keep_session=True) and sessionmaker(expire_on_commit=False)?

2013-02-19 Thread rob . crowell
I'm building a pyramid application using pyramid_tm and 
ZopeTransactionExtension.  We've written a little subscriber on NewResponse 
that writes out some values to a log file about the current user 
(request.user.id) after each request.  For anybody that knows pyramid 
pretty well, we set the request.user property 
using config.set_request_property(get_user, 'user', reify=True), but 
basically we look up the User object once in our db and then cache it as 
request.user for the lifetime of the request.

For the most part this is working fine, except in the case where the 
User object gets modified during the request (change their avatar, name, 
password, whatever).  When this happens, we get a DetachedInstanceError 
when we try to reference the 'id' field of request.user.  It's 
worth pointing out that pyramid_tm runs and commits our transaction before 
our NewResponse subscriber fires, which means that request.user has already 
been flushed and committed to the database and appears to be detached (in 
fact, we can merge it back into our session and continue using it as 
normal, see Solution #3 below).

We've found 3 work-arounds that seem to give us the desired behavior, but 
I'm not really sure which one is better.

Solution 1
--
# setting up the session
DBSession = scoped_session(sessionmaker(bind=engine, 
extension=ZopeTransactionExtension(), expire_on_commit=False))

# in the subscriber
print request.user.id   # does 0 additional db queries

Solution 2
--
# setting up the session
DBSession = scoped_session(sessionmaker(bind=engine, 
extension=ZopeTransactionExtension(keep_session=True)))

# in the subscriber
print request.user.id   # does an additional SELECT query


Solution 3
--
# setting up the session
DBSession = scoped_session(sessionmaker(bind=engine, 
extension=ZopeTransactionExtension()))

# in the subscriber
session = DBSession()
user = session.merge(request.user)   # does an additional SELECT query
print user.id


Without using any of these solutions, we sure enough get the Exception:
DetachedInstanceError: Instance <User at 0x26d5990> is not bound to a 
Session; attribute refresh operation cannot proceed


In this case, it seems Solution 1 is the best as it doesn't do any 
additional SQL queries (and I'm willing to accept that occasionally I might 
be writing a stale User object to disk), but is there any other downside to 
this approach?  After my subscriber runs the web request is finished, so 
theoretically it doesn't matter that the objects are not expired, correct? 
 Is there a better approach here that I am missing?

Thanks!
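
(A minimal sketch of why Solution 1 skips the extra SELECT, assuming the 
same engine and a mapped User class: with expire_on_commit=False, committed 
instances keep their loaded attribute values instead of being expired.)

from sqlalchemy.orm import sessionmaker

Session = sessionmaker(bind=engine, expire_on_commit=False)
session = Session()
user = session.query(User).first()
session.commit()   # attributes are not expired here...
print user.id      # ...so this reads the in-memory value, no SELECT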





Re: [sqlalchemy] Complicated filter clause causes recursion depth exceeded exception

2013-01-22 Thread rob . crowell
Thanks Michael,

Writing a big list of conditions and combining them with and_(*conditions) 
worked well.  I was indeed querying like this before:

for condition in conditions:
  q = q.filter(condition)
print q  
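
(For reference, the flattened form Michael suggests in the quoted reply 
below, as a sketch assuming `conditions` is a plain list of column 
expressions:)

from sqlalchemy import and_

q = q.filter(and_(*conditions))   # one flat AND instead of N nested pairs
print q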

On Friday, January 18, 2013 6:00:04 PM UTC-5, Michael Bayer wrote:


 On Jan 18, 2013, at 4:15 PM, rob.c...@moat.com wrote:

 I haven't boiled this down to a short test case yet, but when my WHERE 
 clause gets especially long I start getting the recursion depth exceeded 
 exception.  Is this a well-known limitation of sqlalchemy?  We're running 
 this query in production currently without SQLAlchemy, and it performs 
 fine, but perhaps I need to look for another approach...

 If I keep the filter condition relatively short, my query looks like this 
 and runs fine (with fake columns start_date, X, Y, and Z on table T):

 SELECT X, sum(Z) AS Z
 FROM T
 WHERE T.start_date >= :start_date_1 
   AND T.start_date <= :start_date_2 
   AND NOT (T.X = :X_1 AND T.Y = :Y_1) 
   AND NOT (T.X = :X_2 AND T.Y = :Y_2)
   AND NOT (T.X = :X_3 AND T.Y = :Y_3)
 GROUP BY T.X

 However, if I make the filter() clause very long (over 150 AND NOT... 
 clauses), I start getting exceptions with this stack trace:


 Always amazing how many wacky new problems come around.   Well, the 
 compilation of these clauses is pretty straightforward, using a recursive 
 traversal scheme.  So if you give Python a tree structure of more than 1000 
 nodes deep and do such a traversal, this is the error you'd get, and I 
 suppose it's sort of well known, depends on what perspective you're 
 coming from.

 So this indicates you're creating a structure that is nested this deeply. 
  Which is to say, really deep !   

 This could happen if you're doing the AND's using a nesting pattern of one 
 at a time like this:

 from sqlalchemy.sql import column

 root = column('x') == 5
 current = root

 for i in xrange(200):
     current = current & (column('x') == 5)

 print current


 because that's really and_(expr, and_(expr, and_(expr, and_( for 200 
 times... ))).

 But if you flatten out the and_() you can get this:

 from sqlalchemy.sql import column, and_

 expr = [column('x') == 5]
 for i in xrange(200):
     expr.append(column('x') == 5)

 expr = and_(*expr)

 print expr

 then you have a flat structure, and you're fine.

 So we could modify our and_()/or_ construct to open itself up this way, 
 that is, as it's built, it flattens out the nesting, though maybe for now 
 there's a way you can build up using one big and_() block.

 In fact to flatten out the nesting is something you could enable across 
 the board here, and you can see why I'm hesitant to build this in by 
 default as it adds lots of isinstance() and other expensive checks, but you 
 can add this to your app as a quick fix (just run this anywhere at import 
 time to redefine how and_() and or_() are rendered):

 from sqlalchemy.ext.compiler import compiles
 from sqlalchemy.sql.expression import BooleanClauseList

 @compiles(BooleanClauseList)
 def flatten_boolean_clause_list(clauselist, compiler, **kw):
     op = clauselist.operator
     flattened = []
     rewrite = False
     stack = list(clauselist.clauses)
     while stack:
         elem = stack.pop(0)
         if isinstance(elem, BooleanClauseList) and elem.operator is op:
             stack[:0] = elem.clauses
             rewrite = True
         else:
             flattened.append(elem)
     if rewrite:
         clauselist = BooleanClauseList(operator=op, *flattened)
     return compiler.visit_clauselist(clauselist, **kw)

 then the original test passes because we've rewritten the nested list as a 
 flat list.   Basically the recursion is replaced by the stack based 
 traversal we do here.

 or even quicker, you could just increase your recursion depth.  It 
 defaults to 1000, so here's 10000; do this before you try to run the SQL:

 import sys
 sys.setrecursionlimit(10000)











   File "test.py", line 350, in do_test
 print q
   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", 
 line 3031, in __str__
 return str(self._compile_context().statement)
   File 
 "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/expression.py", line 
 1790, in __str__
 return unicode(self.compile()).encode('ascii', 'backslashreplace')
   File 
 "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/expression.py", line 
 1778, in compile
 return self._compiler(dialect, bind=bind, **kw)
   File 
 "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/expression.py", line 
 1784, in _compiler
 return dialect.statement_compiler(dialect, self, **kw)
   File 
 "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/compiler.py", line 
 277, in __init__
 engine.Compiled.__init__(self, dialect, statement, **kwargs)
   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", 
 line 705, in __init__
 self.string = self.process(self.statement)
   File 

[sqlalchemy] Complicated filter clause causes recursion depth exceeded exception

2013-01-18 Thread rob . crowell
I haven't boiled this down to a short test case yet, but when my WHERE 
clause gets especially long I start getting the recursion depth exceeded 
exception.  Is this a well-known limitation of sqlalchemy?  We're running 
this query in production currently without SQLAlchemy, and it performs 
fine, but perhaps I need to look for another approach...

If I keep the filter condition relatively short, my query looks like this 
and runs fine (with fake columns start_date, X, Y, and Z on table T):

SELECT X, sum(Z) AS Z
FROM T
WHERE T.start_date >= :start_date_1 
  AND T.start_date <= :start_date_2 
  AND NOT (T.X = :X_1 AND T.Y = :Y_1) 
  AND NOT (T.X = :X_2 AND T.Y = :Y_2)
  AND NOT (T.X = :X_3 AND T.Y = :Y_3)
GROUP BY T.X

However, if I make the filter() clause very long (over 150 AND NOT... 
clauses), I start getting exceptions with this stack trace:

  File "test.py", line 350, in do_test
print q
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", 
line 3031, in __str__
return str(self._compile_context().statement)
  File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/expression.py", line 
1790, in __str__
return unicode(self.compile()).encode('ascii', 'backslashreplace')
  File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/expression.py", line 
1778, in compile
return self._compiler(dialect, bind=bind, **kw)
  File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/expression.py", line 
1784, in _compiler
return dialect.statement_compiler(dialect, self, **kw)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/compiler.py", 
line 277, in __init__
engine.Compiled.__init__(self, dialect, statement, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", 
line 705, in __init__
self.string = self.process(self.statement)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", 
line 724, in process
return obj._compiler_dispatch(self, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/visitors.py", 
line 72, in _compiler_dispatch
return getter(visitor)(self, **kw)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/compiler.py", 
line 941, in visit_select
t = select._whereclause._compiler_dispatch(self, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/visitors.py", 
line 72, in _compiler_dispatch
return getter(visitor)(self, **kw)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/compiler.py", 
line 477, in visit_clauselist
for c in clauselist.clauses)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/compiler.py", 
line 475, in <genexpr>
s for s in 
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/compiler.py", 
line 477, in <genexpr>
for c in clauselist.clauses)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/visitors.py", 
line 72, in _compiler_dispatch
return getter(visitor)(self, **kw)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/compiler.py", 
line 477, in visit_clauselist
for c in clauselist.clauses)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/compiler.py", 
line 475, in <genexpr>
s for s in 
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/compiler.py", 
line 477, in <genexpr>
for c in clauselist.clauses)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/visitors.py", 
line 72, in _compiler_dispatch
return getter(visitor)(self, **kw)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/compiler.py", 
line 477, in visit_clauselist
for c in clauselist.clauses)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/compiler.py", 
line 475, in <genexpr>
s for s in 
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/compiler.py", 
line 477, in <genexpr>
for c in clauselist.clauses)
...
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/compiler.py", 
line 475, in <genexpr>
s for s in 
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/compiler.py", 
line 477, in <genexpr>
for c in clauselist.clauses)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/visitors.py", 
line 72, in _compiler_dispatch
return getter(visitor)(self, **kw)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/compiler.py", 
line 477, in visit_clauselist
for c in clauselist.clauses)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/compiler.py", 
line 475, in <genexpr>
s for s in 
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/compiler.py", 
line 477, in <genexpr>
for c in clauselist.clauses)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/visitors.py", 
line 72, in _compiler_dispatch
return getter(visitor)(self, **kw)
RuntimeError: maximum recursion depth exceeded


[sqlalchemy] Re: Is it possible to define multiple model objects that refer to the same table?

2012-11-21 Thread Rob Crowell
Thanks so much!  Your pointers were exactly what I needed,
specifically the bit which led me to discover exclude_properties.
I'll leave my working code here in case it ever helps anybody else
out:

from sqlalchemy import (Column, Date, Enum, Integer, String, Table,
                        create_engine)
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import ColumnProperty, sessionmaker
from sqlalchemy.orm.mapper import class_mapper

engine = create_engine('mysql://user:password@localhost/issue_tracker')
Session = sessionmaker(bind=engine)

Base = declarative_base()

tables = {'issue_type':
  Table('issue_type', Base.metadata,
Column('id', Integer, primary_key=True),
Column('type', Enum('Bug', 'Feature',
'Improvement', 'One-Time Task')),
Column('created', Date),
Column('num_visits', Integer)),

  'issue_type_label':
  Table('issue_type_label', Base.metadata,
Column('id', Integer, primary_key=True),
Column('type', Enum('Bug', 'Feature',
'Improvement', 'One-Time Task')),
Column('label_id', String),
Column('created', Date),
Column('num_visits', Integer))}

def get_columns(model):
    return [x.key for x in class_mapper(model).iterate_properties
            if isinstance(x, ColumnProperty)]

class IssueType(Base):
    __table__ = tables['issue_type']


class IssueLabel(Base):
    __table__ = tables['issue_type_label']
    __mapper_args__ = {'exclude_properties': ['type']}


class IssueTypeLabel(Base):
    __table__ = tables['issue_type_label']


print "issue type:", get_columns(IssueType)
print "issue label:", get_columns(IssueLabel)
print "issue type label:", get_columns(IssueTypeLabel)


This code correctly prints the following:
issue type: ['id', 'type', 'created', 'num_visits']
issue label: ['id', 'label_id', 'created', 'num_visits']
issue type label: ['id', 'type', 'label_id', 'created',
'num_visits']

On Nov 16, 8:49 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Nov 16, 2012, at 2:59 PM, Rob Crowell wrote:











  Thanks for the help so far Michael!  I can explain a little more about
  what I'm trying to do (I'm using a fictional application here but I
  think it pretty accurately translates into my actual application).

  BACKGROUND
  --

  Let's say I'm writing an issue tracking application.  Each issue that
  we're tracking has a type (an issue must have exactly one type), and
  each issue may have an unlimited number of user-provided labels.

  Each day, people browse our issue tracker and each time they do they
  generate a page view on the issue.  Here's an example of one day's
  worth of data:

     IssueA: Views: 1, Type: Bug, Labels: [login-page (id=1),
  show-stopper (id=2)]
     IssueB: Views: 20, Type: One-Time Task, Labels: [disk-full
  (id=3), show-stopper (id=2)]
     IssueC: Views: 300, Type: Bug, Labels: [login-page (id=1)]

  The BigCo. I'm working for is very interested in knowing which issues
  are read by the most people, and they need the ability to generate
  reports sliced by arbitrary date ranges.  However, we can tolerate a
  day delay, so we are writing summary tables each night.  Two of these
  summary tables are aggregated by either issue type or label, and we
  also write a third table that can be used to drill-down and see page
  visits bucketed by both type and label:

  CREATE TABLE `issue_type` (
   `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
   `created` datetime NOT NULL,
   `type` enum('Bug','Feature','Improvement','One-Time Task') NOT NULL,
   `num_visits` int(10) unsigned NOT NULL,
   PRIMARY KEY (`id`)
  )

  CREATE TABLE `issue_label` (
   `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
   `created` datetime NOT NULL,
   `label_id` int(10) unsigned NOT NULL,
   `num_visits` int(10) unsigned NOT NULL,
   PRIMARY KEY (`id`)
  )

  CREATE TABLE `issue_type_label` (
   `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
   `created` datetime NOT NULL,
   `type` enum('Bug','Feature','Improvement','One-Time Task') NOT NULL,
   `label_id` int(10) unsigned NOT NULL,
   `num_visits` int(10) unsigned NOT NULL,
   PRIMARY KEY (`id`)
  )

  So we'd run these insert statements at midnight:

  INSERT INTO issue_type (created, type, num_visits) VALUES
     ('2012-11-15', 'Bug', 301),
     ('2012-11-15', 'One-Time Task', 20);

  INSERT INTO issue_label (created, label_id, num_visits) VALUES
     ('2012-11-15', 1, 301),
     ('2012-11-15', 2, 21),
     ('2012-11-15', 3, 20);

  INSERT INTO issue_type_label (created, type, label_id, num_visits)
  VALUES
     ('2012-11-15', 'Bug', 1, 301),
     ('2012-11-15', 'Bug', 2, 1),
     ('2012-11-15', 'One-Time Task', 3, 20),
     ('2012-11-15', 'One-Time Task', 2, 20);

  Now when we want to generate the summary reports, we query one of the
  first two tables (if we're generating a report aggregated by issue
  type we hit issue_type, if we're

[sqlalchemy] Re: Is it possible to define multiple model objects that refer to the same table?

2012-11-16 Thread Rob Crowell
On Nov 15, 10:48 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Nov 15, 2012, at 7:33 PM, Rob Crowell wrote:









  Sorry, that got cut off at the end.

  class IssueTag(Base):
      __tablename__ = 'issue_user_tag'

  sqlalchemy.exc.InvalidRequestError: Table 'issue_user_tag' is already 
  defined for this MetaData instance.  Specify 'extend_existing=True' to 
  redefine options and columns on an existing Table object.

  On Thursday, November 15, 2012 7:32:29 PM UTC-5, Rob Crowell wrote:
  I'm working with a denormalized cache schema, and I've run into a situation 
  where it would be helpful to be able to create multiple classes that extend 
  Base but refer to the same __tablename__.  Is this possible to do?  I am 
  getting this Exception:
      sqlalchemy.exc.InvalidRequestError: Table '[tablename]' is already 
  defined for this MetaData instance.  Specify 'extend_existing=True' to 
  redefine options and columns on an existing Table object.

  For a little more insight, we have some attributes that always have exactly 
  one value (user who created the issue), and other attributes that can have 
  1 or more values (user-defined tags for the issue).  If we were being 
  exhaustive, we would create two cached tables for our issues since 
  sometimes we want to display recent issues sometimes by user and sometimes 
  by tag:
      * issue_user
      * issue_tag

  However, we can get away with writing just one table and querying it with 
  an appropriate group_by(user_id) to achieve the same end as having 2 
  tables.  Since my application should behave as if there were 2 separate 
  cache tables (and I'd like to keep open the option of adding two separate 
  cache tables in the future), I would like to have 2 different Base classes 
  representing the two ways in which we would query the table.  The obvious 
  way of doing this doesn't work:

  class IssueUser(Base):
      __tablename__ = 'issue_user_tag'

  class IssueTag(Base):

  --

 two^H^H^H three ways:

 1. map to a Table:

 mytable = Table('mytable', Base.metadata, Column(...))

 class A(Base):
     __table__ = mytable

 class B(Base):
     __table__ = mytable

 1a: variant of 1, map A as you did but use __table__ on B

 class A(Base):
     __tablename__ = 'mytable'

     x = Column(...)

 class B(Base):
     __table__ = A.__table__

 2. use single table inheritance with no discriminator

 class MyTable(Base):
     __tablename__ = 'mytable'

 class A(MyTable):
    # 

 class B(MyTable):
    # ...

 I don't have an understanding of your querying situation yet, discriminating 
 on group_by() seems a little strange as group_by() is only intended to be 
 used to group for aggregates, but #1, #1a or #2 should fit the bill.
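
 (A runnable sketch of option #1, with the Column(...) stubs filled in
 using assumed id/name columns purely for illustration:)

 from sqlalchemy import Column, Integer, String, Table
 from sqlalchemy.ext.declarative import declarative_base

 Base = declarative_base()

 # one Table object shared by two mapped classes
 mytable = Table('mytable', Base.metadata,
     Column('id', Integer, primary_key=True),
     Column('name', String(50)))

 class A(Base):
     __table__ = mytable

 class B(Base):
     __table__ = mytable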


Thanks for the help so far Michael!  I can explain a little more about
what I'm trying to do (I'm using a fictional application here but I
think it pretty accurately translates into my actual application).

BACKGROUND
--

Let's say I'm writing an issue tracking application.  Each issue that
we're tracking has a type (an issue must have exactly one type), and
each issue may have an unlimited number of user-provided labels.

Each day, people browse our issue tracker and each time they do they
generate a page view on the issue.  Here's an example of one day's
worth of data:

IssueA: Views: 1, Type: Bug, Labels: [login-page (id=1),
show-stopper (id=2)]
IssueB: Views: 20, Type: One-Time Task, Labels: [disk-full
(id=3), show-stopper (id=2)]
IssueC: Views: 300, Type: Bug, Labels: [login-page (id=1)]

The BigCo. I'm working for is very interested in knowing which issues
are read by the most people, and they need the ability to generate
reports sliced by arbitrary date ranges.  However, we can tolerate a
day delay, so we are writing summary tables each night.  Two of these
summary tables are aggregated by either issue type or label, and we
also write a third table that can be used to drill-down and see page
visits bucketed by both type and label:

CREATE TABLE `issue_type` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `created` datetime NOT NULL,
  `type` enum('Bug','Feature','Improvement','One-Time Task') NOT NULL,
  `num_visits` int(10) unsigned NOT NULL,
  PRIMARY KEY (`id`)
)

CREATE TABLE `issue_label` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `created` datetime NOT NULL,
  `label_id` int(10) unsigned NOT NULL,
  `num_visits` int(10) unsigned NOT NULL,
  PRIMARY KEY (`id`)
)

CREATE TABLE `issue_type_label` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `created` datetime NOT NULL,
  `type` enum('Bug','Feature','Improvement','One-Time Task') NOT NULL,
  `label_id` int(10) unsigned NOT NULL,
  `num_visits` int(10) unsigned NOT NULL,
  PRIMARY KEY (`id`)
)

So we'd run these insert statements at midnight:

INSERT INTO issue_type (created, type, num_visits) VALUES
('2012-11-15', 'Bug', 301),
('2012-11-15', 'One-Time Task', 20);

INSERT INTO issue_label (created, label_id

Re: [sqlalchemy] Single Table Inheritance with mult-column keys

2012-08-24 Thread Rob
Hi Michael,

That does exactly what I was after (and I've learned a little bit more 
about sqlalchemy!)

Thank you very much for your help.




On Thursday, 23 August 2012 19:13:27 UTC+1, Michael Bayer wrote:


 On Aug 23, 2012, at 10:04 AM, Rob wrote:

 Hi Michael,

 I have a similar (but subtly different) problem to this, trying to mix 
 single- and joined-table inheritance.
 Essentially my model looks as follows:

 Product(Base)
 PhysicalProduct(Product)   
 NonPhysicalProduct(Product)

 The Physical/NonPhysicalProduct use single table inheritance whilst 
 objects inheriting from them use joined tables...  

 I have a fully working model --- and there's no question that it works!! 
  But I can't help feeling that I've missed something relating to the 
 __mapper_args__ which is then requiring explicit calls to __init__ objects 
 higher up the tree.  (rather than bunging up this message, please see the 
 attached file) 

 I'd be really grateful if you could take a look and hopefully point me in 
 the right direction.



 this model wouldn't work fully, as if you did query(Product), the 
 discriminator would only be the product_type column.  p_discr and 
 np_discr would be ignored.   The polymorphic load only checks for one key 
 in one map, and that map is always shared among all classes in the 
 hierarchy starting at Product.

 I'm going to call this pattern "cascading polymorphic ons" for now, I 
 guess.   The attached file will get you there, but requires two techniques 
 that are probably outside of the scope of what the average SQLA user 
 could be expected to come up with on his or her own.  To get SQLA to do 
 these kinds of things automatically should be possible, most likely by just 
 having the "match polymorphic_on to class" phase during loading continue 
 its process several times.   The concept is not too hard but there's edges 
 to it which may make it a more involved enhancement; this is 
 http://www.sqlalchemy.org/trac/ticket/2555.

 For now here's the workaround version:

 """
 mixed single and joined table inheritance.
 """

 from sqlalchemy import *
 from sqlalchemy import types
 from sqlalchemy.orm import *
 from sqlalchemy.ext.declarative import declarative_base, declared_attr
 from sqlalchemy import event

 Base = declarative_base()

 class Product(Base):
     __tablename__ = 'products'
     id = Column(types.Integer, primary_key=True)
     discriminator = Column('product_type', types.String(50),
                            nullable=False)

     _discriminator = discriminator

     def price_history(self):
         return []

 class PhysicalProduct(Product):
     p_discr = Column(types.String(50))

     _discriminator = p_discr

     @declared_attr
     def __mapper_args__(cls):
         return {'polymorphic_identity': 'physical_product'}

     def inventory(self):
         return "computed inventory"

 class NonPhysicalProduct(Product):
     np_discr = Column(types.String(50))

     _discriminator = np_discr

     @declared_attr
     def __mapper_args__(cls):
         return {'polymorphic_identity': 'nonphysical_product'}

     def somefunc(self):
         return "someval"

 # set polymorphic on as a coalesce of those three
 # columns.  It's after the fact because p_discr and np_discr
 # are defined after Product, but if you move them up then
 # this can be inline inside of Product.__mapper_args__.
 # this would improve loads too as it appears the p_discr/np_discr columns
 # aren't loaded directly when you query for Product
 for mp in Product.__mapper__.self_and_descendants:
     mp._set_polymorphic_on(
         func.coalesce(
             Product.__table__.c.p_discr,
             Product.__table__.c.np_discr,
             Product.__table__.c.product_type
         ))

 # build our own system of assigning polymorphic identities
 # to instances; use the 'init' event.
 # Add a print for the identity dict to see what it's doing.
 @event.listens_for(Product, "init", propagate=True)
 def init(target, args, kwargs):
     identity = {}
     for cls, supercls in zip(type(target).__mro__,
                              type(target).__mro__[1:]):
         if not hasattr(supercls, '_discriminator'):
             break
         discriminator_attr = supercls._discriminator
         poly_identity = cls.__mapper__.polymorphic_identity
         identity.setdefault(discriminator_attr, poly_identity)
     for key in identity:
         setattr(target, key, identity[key])

 class Newspaper(PhysicalProduct):
     __tablename__ = 'newspapers'
     __mapper_args__ = {'polymorphic_identity': 'newspaper'}

     id = Column(types.Integer,
                 ForeignKey('products.id'),
                 primary_key=True)
     title = Column(types.String(50))

     def __init__(self, title):
         self.title = title

 class NewspaperDelivery(NonPhysicalProduct):
     __tablename__ = 'deliveries'
     __mapper_args__ = {'polymorphic_identity': 'delivery'}

     id = Column(types.Integer,
                 ForeignKey

Re: [sqlalchemy] Single Table Inheritance with mult-column keys

2012-08-23 Thread Rob
Hi Michael,

I have a similar (but subtly different) problem to this, trying to mix 
single- and joined-table inheritance.
Essentially my model looks as follows:

Product(Base)
PhysicalProduct(Product)   
NonPhysicalProduct(Product)

The Physical/NonPhysicalProduct use single table inheritance whilst objects 
inheriting from them use joined tables...  

I have a fully working model --- and there's no question that it works!! 
 But I can't help feeling that I've missed something relating to the 
__mapper_args__ which is then requiring explicit calls to __init__ objects 
higher up the tree.  (rather than bunging up this message, please see the 
attached file) 

I'd be really grateful if you could take a look and hopefully point me in 
the right direction.

Many thanks,
Rob 
   





On Wednesday, 17 August 2011 00:42:28 UTC+1, Michael Bayer wrote:


 On Aug 16, 2011, at 5:37 PM, Mike Gilligan wrote:

 I have a single table that looks similar to the following:

 class Equipment(Base):
 type = Column(CHAR(1), primary_key=True)
 sub_type = Column(CHAR(1), primary_key=True)
 code = Column(CHAR(5), primary_key=True)


 For historical purposes, I cannot modify this table. I would like to setup 
 multi-level inheritance similar to this, however it does not work:

 class Equipment(Base):
 type = Column(CHAR(1), primary_key=True)
 sub_type = Column(CHAR(1), primary_key=True)
 code = Column(CHAR(5), primary_key=True)
 __mapper_args__ = {'polymorphic_on': type}


 class Vehicle(Equipment):
  __mapper_args__ = {'polymorphic_identity': 'V', 'polymorphic_on': 
 sub_type}


 class Bus(Vehicle)
  __mapper_args__ = {'polymorphic_identity': 'B'}


 class Rail(Vehicle)
  __mapper_args__ = {'polymorphic_identity': 'R'}


 I can concatenate the multiple column values into a single discriminator 
 column_property but then I do not have an easy way to retrieve all 
 vehicles. Any ideas?


 The inheritance querying does handle multi-level inheritance so if your 
 discriminator was on a concatenation of both things would work just fine, 
 i.e. if you queried for Vehicle, etc.   Each object's 
 polymorphic_identity would need to include the concatenated value, of 
 course.

 Unfortunately we're just beginning to support inheritance discriminators 
 on a column_property(), and you need to use a very specific approach to 
 make this work right now.  There's some tickets in trac to allow this 
 functionality out of the box.Attached is an example script which 
 exercises the above mapping - it uses declarative to minimize the impact of 
 the workaround.





"""
mixed single and joined table inheritance.
"""

from sqlalchemy import *
from sqlalchemy import types
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base, declared_attr

Base = declarative_base()

class Product(Base):
    __tablename__ = 'products'
    id = Column(types.Integer, primary_key=True)
    discriminator = Column('product_type', types.String(50), nullable=False)

    __mapper_args__ = {'polymorphic_on': discriminator}

    def price_history(self):
        return []


class PhysicalProduct(Product):
    p_discr = Column(types.String(50))

    @declared_attr
    def __mapper_args__(cls):
        return {'polymorphic_on': cls.p_discr,
                'polymorphic_identity': 'physical_product'}

    def __init__(self, **kw):
        print "init PP kwargs:", kw
        self.discriminator = 'physical_product'

    def inventory(self):
        return "computed inventory"


class NonPhysicalProduct(Product):
    np_discr = Column(types.String(50))

    @declared_attr
    def __mapper_args__(cls):
        return {'polymorphic_on': cls.np_discr,
                'polymorphic_identity': 'nonphysical_product'}

    def __init__(self, **kw):
        print "init NP kwargs:", kw
        self.discriminator = 'nonphysical_product'

    def somefunc(self):
        return "someval"


class Newspaper(PhysicalProduct):
    __tablename__ = 'newspapers'
    __mapper_args__ = {'polymorphic_identity': 'newspaper'}

    id = Column(types.Integer,
                ForeignKey('products.id'),
                primary_key=True)
    title = Column(types.String(50))

    def __init__(self, title):
        self.title = title
        super(Newspaper, self).__init__()


class NewspaperDelivery(NonPhysicalProduct):
    __tablename__ = 'deliveries'
    __mapper_args__ = {'polymorphic_identity': 'delivery'}

    id = Column(types.Integer,
                ForeignKey('products.id

[sqlalchemy] [R]edesign

2011-02-01 Thread Rob
Hi all -
I'm starting a project in R to write an ORM along the lines of
SQLAlchemy (but minified). I wanted to solicit opinions on a few
subjects:
1) What is best way to start and what are the 'best' places to safely
cut corners and drop functionality?
2) Are there any components for which I'd want to consider diverging
from SQLAlchemy's approach?

Some obvious language-specific implementation issues exist, but I
anticipate those being resolvable with some thought.
My goal is to get something up and running in a few hundred hours
(leading me to question 3: is that sane?) that would necessarily be
very limited in scope: it would not support table creation and only
have a dialectical backing for MySQL. Many other components I could
leave as stubs. In any case, I want to avoid design decisions that
will limit future extensibility.

Thanks!

-- Rob




[sqlalchemy] Reflected Tables and Session [and how to bind]

2010-02-12 Thread Rob
Hi,

I desperately hope that someone can help with this!!  I am connecting
to a remote machine (the connection works fine) but am having problems
understanding how to use/instantiate reflected tables.

My class would read as follows:

Base = declarative_base()

class ConnectLog(Base):
    __table__ = Table('external_connection_log', Base.metadata,
                      autoload=True, autoload_with=engine)

However, importing [or trying to instantiate] this class will fail
until the connection is in place, i.e. the class as it stands cannot be
used until a valid engine is bound.  I'm guessing that something
will need to be passed to an __init__ in the class above.

Is there a strategy for dealing with this?

Many thanks,
Rob
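
(One pattern from this era, sketched under the assumption that the
reflection can simply be deferred until an engine exists; init_model and
its shape are illustrative, not a confirmed answer from the list:)

from sqlalchemy import Table, create_engine
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

def init_model(engine):
    # run once at startup, after the engine is created, so autoload
    # reflects against a live connection instead of at import time
    class ConnectLog(Base):
        __table__ = Table('external_connection_log', Base.metadata,
                          autoload=True, autoload_with=engine)
    return ConnectLog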




[sqlalchemy] SqlSoup and joined-table inheritance

2010-02-09 Thread Rob
Hi,

I have a question regarding the most efficient way of solving what
should be a fairly simple task.
I have (on a server) a database set up using SQLAlchemy to define a
joined-table (parent/child) inheritance structure (using a
discriminator field as per the documentation etc)

On a client machine, I want to use SqlSoup to insert data (I don't
want to replicate the object model on the client machine) however, the
only way I can see to do this is along the following lines:

parentRecord = db.parent.insert( [fields] )
db.flush()
childRecord = db.child.insert(id=parentRecord.id, [fields])
db.flush()

A flush is required (at some point) to commit the data to the table,
but for multiple inserts the method above is horribly slow.
Am I missing something fundamental?  Is there a faster/better
(possibly correct!) way to do this?

Many thanks,
Rob




[sqlalchemy] Re: SqlSoup and joined-table inheritance

2010-02-09 Thread Rob
Thanks for your super-quick response Michael !!

I have a feeling that (especially given that there are a number of
foreign keys involved in this) I may be best off, as you suggest,
using the mappings/reflected tables.

Many thanks again,
Rob


On Feb 9, 11:17 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 Rob wrote:
  Hi,

  I have a question regarding the most efficient way of solving what
  should be a fairly simple task.
  I have (on a server) a database set up using SQLAlchemy to define a
  joined-table (parent/child) inheritance structure (using a
  discriminator field as per the documentation etc)

  On a client machine, I want to use SqlSoup to insert data (I don't
  want to replicate the object model on the client machine) however, the
  only way I can see to do this is along the following lines:

  parentRecord = db.parent.insert( [fields] )
  db.flush()
  childRecord = db.child.insert(id=parentRecord.id, [fields])
  db.flush()

  A flush is required (at some point) to commit the data to the table,
  but for multiple inserts the method above is horribly slow.
  Am I missing something fundamental?  Is there a faster/better
  (possibly correct!) way to do this?

 the fastest way to insert many rows is to execute a table.insert() using
 executemany syntax.

 Above, I'm fairly certain SqlSoup can also map to a join, which is
 probably what you'd want to do here.  The inserts occur in a batch where
 it sends the id of the parent table into the child row before inserting
 that one.

 Alternatively, you could create the list of parentRecords first, do a
 single flush(), then get the list of all the child ids and populate those.

 If it were me I'd just use a simple joined-table inheritance model with
 declarative.  Using reflected tables, it's barely any more typing than what
 SqlSoup requires.  SqlSoup is really just a typing saver; in any case it's
 still mapping classes to tables, just in a very rigid and difficult to
 customize way.
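
 (A minimal executemany sketch along the lines suggested above, assuming
 reflected Table objects and this era's engine.execute(); the table and
 column names are illustrative:)

 from sqlalchemy import create_engine, MetaData, Table

 engine = create_engine('mysql://user:password@localhost/mydb')
 meta = MetaData(bind=engine)
 parent = Table('parent', meta, autoload=True)

 # one INSERT statement, many rows: far fewer round trips than an
 # insert-and-flush per row
 engine.execute(parent.insert(), [
     {'name': 'a'},
     {'name': 'b'},
     {'name': 'c'},
 ])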





  Many thanks,
  Rob





[sqlalchemy] multiple foreign keys to the same table

2008-08-23 Thread Rob

Hi,

I'm using sqlalchemy 0.5 beta 3 and I am trying to have a Call object
that contains two relations to a Contact object.  One is the callee
and the other is the caller.   The code is as follows:

from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Table, Column, Integer, String, MetaData,
ForeignKey
from sqlalchemy.orm import relation, backref, mapper

Base = declarative_base()
metadata = Base.metadata

contact_table = Table('contact', metadata,
Column('id', Integer, primary_key=True),
Column('first_name', String(20)),
Column('last_name', String(30)))

call_table = Table('call', metadata,
Column('id', Integer, primary_key=True),
Column('subject', String(255)),
Column('callee_id', Integer, ForeignKey('contact.id')),
Column('caller_id', Integer, ForeignKey('contact.id')))

class Contact(object):
    def __init__(self, first_name, last_name):
        self.first_name = first_name
        self.last_name = last_name

    def __repr__(self):
        return self.first_name + ' ' + self.last_name

mapper(Contact, contact_table)

class Call(object):
    def __init__(self, subject, callee, caller):
        self.subject = subject
        self.callee = callee
        self.caller = caller

    def __repr__(self):
        return self.subject

mapper(Call, call_table, properties={
    'callee':relation(Call,
        primaryjoin=call_table.c.callee_id==contact_table.c.id,
        backref='callee_calls'),
    'caller':relation(Call,
        primaryjoin=call_table.c.caller_id==contact_table.c.id,
        backref='caller_calls')
})

c = Contact('my_first_name', 'my_last_name')

I get a long error:
sqlalchemy.exc.ArgumentError: Could not locate any equated, locally
mapped column pairs for primaryjoin condition 'call.caller_id =
contact.id' on relation Call.caller. For more relaxed rules on join
conditions, the relation may be marked as viewonly=True.

Can someone tell me what I'm doing wrong, please?

Thanks,

Rob




[sqlalchemy] Re: multiple foreign keys to the same table

2008-08-23 Thread Rob

Hi,

You're right:

mapper(Call, call_table, properties={
    'callee':relation(Contact,
        primaryjoin=call_table.c.callee_id==contact_table.c.id,
        backref='callee_calls'),
    'caller':relation(Contact,
        primaryjoin=call_table.c.caller_id==contact_table.c.id,
        backref='caller_calls')
})

did the trick.

Thanks!

On Aug 23, 8:58 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Aug 23, 2008, at 1:56 PM, Rob wrote:





  Hi,

  I'm using sqlalchemy 0.5 beta 3 and I am trying to have a Call object
  that contains two relations to a Contact object.  One is the callee
  and the other is the caller.   The code is as follows:

  from sqlalchemy.ext.declarative import declarative_base
  from sqlalchemy import Table, Column, Integer, String, MetaData,
  ForeignKey
  from sqlalchemy.orm import relation, backref, mapper

  Base = declarative_base()
  metadata = Base.metadata

  contact_table = Table('contact', metadata,
     Column('id', Integer, primary_key=True),
     Column('first_name', String(20)),
     Column('last_name', String(30)))

  call_table = Table('call', metadata,
     Column('id', Integer, primary_key=True),
     Column('subject', String(255)),
     Column('callee_id', Integer, ForeignKey('contact.id')),
     Column('caller_id', Integer, ForeignKey('contact.id')))

  class Contact(object):
     def __init__(self, first_name, last_name):
             self.first_name = first_name
             self.last_name = last_name

     def __repr__(self):
             return self.first_name + ' ' + self.last_name

  mapper(Contact, contact_table)

  class Call(object):
     def __init__(self, subject, callee, caller):
             self.subject = subject
             self.callee = callee
             self.caller = caller

     def __repr__(self):
             return self.subject

  mapper(Call, call_table, properties={
     'callee':relation(Call,
  primaryjoin=call_table.c.callee_id==contact_table.c.id,
  backref='callee_calls'),
     'caller':relation(Call,
  primaryjoin=call_table.c.caller_id==contact_table.c.id,
  backref='caller_calls')
     })

  c = Contact('my_first_name', 'my_last_name')

  I get a long error:
  sqlalchemy.exc.ArgumentError: Could not locate any equated, locally
  mapped column pairs for primaryjoin condition 'call.caller_id =
  contact.id' on relation Call.caller. For more relaxed rules on join
  conditions, the relation may be marked as viewonly=True.

 callee and caller relate Call to another Call.  The join condition  
 given does not connect call_table  to itself and instead connects to  
 contact_table which is not involved in the relation().  Based on the  
 table it seems like callee and caller should relate to a Contact,  
 not a Call.



[sqlalchemy] Modelling a complex(ish) many-to-many association with dates

2008-07-20 Thread Rob Cowie

I have a many-to-many relationship between 'Company' and 'Index',
defined in an association table (CompanyIndex). Schemas below:

Company: (companyID, companyName, ...)
Index:   (indexID, indexName)
CompanyIndex: (companyID, indexID, eventDate, eventType)

Would like to provide a function on company to return index membership
given a date i.e. company.indices(date) == index collection
Would further like the collection (returned by the company.indices
method) to create an entry in the association table on del or append
of an index object.
This would of course require the collection to be aware of existing
indices, and not write a row for an existing relationship (I think there
is a pattern for this on the wiki).

Am I biting off more than sa can chew? Should I implement this in some
other, more app-specific way?

Thanks in advance,

Rob C



[sqlalchemy] Re: Modelling a complex(ish) many-to-many association with dates

2008-07-20 Thread Rob Cowie

On reflection, I didn't pose the problem very clearly.

What I mean with regard to appending or removing index objects is that
the collection returned by company.indices(date) would - on append of
an index object - record the association by creating an
IndexAssociation object (mapped to the association table) and save it.
The result would be that appending an index to the collection would
add an entry to the association table with the companyID, the indexID, the
appropriate date (passed to .indices()) AND a type of 'JOIN'.

This is similar to the functionality provided by the standard sa
collections.

Regardless, I think the rest of your answer implies a custom solution
is appropriate.

Cheers,

Rob C

On Jul 20, 6:44 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Jul 20, 2008, at 1:22 PM, Rob Cowie wrote:





  I have a many-to-many relationship between 'Company' and 'Index',
  defined in an association table (CompanyIndex). Schemas below:

  Company:         (companyID, companyName, ...)
  Index:               (indexID, indexName)
  CompanyIndex: (companyID, indexID, eventDate, eventType)

  Would like to provide a function on company to return index membership
  given a date i.e. company.indices(date) == index collection
  Would further like the collection (returned by the company.indices
  method) to create an entry in the association table on del or append
  of an index object.
  This would of course require the collection to be aware of existing
  indices, and not write a row for existing relationship (i think there
  is a pattern for this on the wiki).

  Am I biting of more than sa can chew? Should I implement this in some
  other, more app-specific way?

 I don't entirely follow how del or append of an index object relates  
 to create an entry in the association table.  How would the deletion  
 of an Index (deletion from the collection, or from the database  
 overall ?) be represented in the CompanyIndex object?

 Though in any case, since the indices() method is not a SQLA relation,  
 you'd just return a user defined collection which does whatever you  
 want upon append(), remove(), etc.   If someone appends an Index  
 object, you do a query for it first and session.add() otherwise.
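
(A sketch of such a user-defined collection, assuming CompanyIndex is a
mapped class over the association table with the column names given at the
top of the thread; the 'JOIN'/'LEAVE' eventType values are illustrative:)

class DatedIndexCollection(list):
    def __init__(self, company, date, session, indices=()):
        list.__init__(self, indices)
        self.company = company
        self.date = date
        self.session = session

    def append(self, index):
        # record the membership event alongside the in-memory append
        list.append(self, index)
        self.session.add(CompanyIndex(companyID=self.company.companyID,
                                      indexID=index.indexID,
                                      eventDate=self.date,
                                      eventType='JOIN'))

    def remove(self, index):
        list.remove(self, index)
        self.session.add(CompanyIndex(companyID=self.company.companyID,
                                      indexID=index.indexID,
                                      eventDate=self.date,
                                      eventType='LEAVE'))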