[sqlalchemy] How to set Row instance attribute

2023-05-06 Thread sector119
Hello,

I get some data from the DB and have a sequence of Row objects. How can I 
modify the data in those objects' attributes?

Something like

result = await dbsession.execute(query)
services = result.all()

for i, service in enumerate(services):
    services[i].title = f"some text here {service.title}"

but I get AttributeError("can't set attribute")

Thanks
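One workaround (a sketch of my own, not from the thread): Row objects are 
immutable, so copy each row into a plain dict via Row._mapping before editing. 
dbsession and query are the names assumed from the question, and the query is 
assumed to select individual columns.

result = await dbsession.execute(query)
# Row is read-only; dict(row._mapping) gives an editable copy per row
services = [dict(row._mapping) for row in result.all()]

for service in services:
    service["title"] = f"some text here {service['title']}"

Alternatively, selecting mapped ORM entities (e.g. result.scalars().all()) 
returns objects whose attributes can be set directly.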



Re: [sqlalchemy] "flatten" model columns in Row

2022-11-20 Thread sector119
Thank you, Mike!

Sunday, November 20, 2022 at 16:31:09 UTC+2, Mike Bayer:

> the quickest way at the moment is to use select(X.__table__, y_col), or, 
> if you use select(X), you can execute from session.connection().execute().
>
>   
>
> On Sat, Nov 19, 2022, at 5:31 PM, sector119 wrote:
>
> Hello
>
> For example I have two models X and Y, I want to get all columns from X 
> model and only one or two cols from Y model in the same Row as
> [(x_col1, x_col2, x_col3, y_col1), ...] but not [(X, y_col1)] as I get 
> when I perform
> select(X, Y.col1).join(X.y)
>
> I just don't want to write all X model cols down at the select statement 
> and just one from Y model like select(X.col1, X.col2, X.col3, Y.col1), X 
> model has plenty of cols, and I'm lazy )
>
> Thanks
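For reference, a minimal sketch of the second suggestion above: executing the 
ORM select on a Core connection so the row comes back flattened into plain 
columns. X, Y, the X.y relationship, and session are the names assumed from 
the thread.

from sqlalchemy import select

stmt = select(X, Y.col1).join(X.y)
# Executing on the Core connection returns flat column tuples
# (x_col1, x_col2, ..., y_col1) instead of (X instance, y_col1).
rows = session.connection().execute(stmt).all()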



[sqlalchemy] "flatten" model columns in Row

2022-11-19 Thread sector119
Hello

For example, I have two models, X and Y. I want to get all columns from the X 
model and only one or two columns from the Y model in the same Row, as 
[(x_col1, x_col2, x_col3, y_col1), ...], not the [(X, y_col1)] I get when I 
perform
select(X, Y.col1).join(X.y)

I just don't want to write out all of the X model's columns in the select 
statement plus the one from Y, like select(X.col1, X.col2, X.col3, Y.col1); 
the X model has plenty of columns, and I'm lazy )

Thanks



Re: [sqlalchemy] SQLAlchemy 2. psycopg3 type adapt

2022-09-11 Thread sector119
Thank you, Mike. I thought I could adapt types with SQLAlchemy rather than with 
the DB adapter (psycopg), to keep the solution independent of the DB adapter.
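A dialect-agnostic sketch of that idea (my own, not from the thread): a 
TypeDecorator that coerces NumPy integers to plain Python ints before they 
reach the driver.

import numpy as np
from sqlalchemy import BigInteger
from sqlalchemy.types import TypeDecorator


class CoerceNumpyInt(TypeDecorator):
    # Accepts numpy integer values and binds them as plain Python ints
    impl = BigInteger
    cache_ok = True

    def process_bind_param(self, value, dialect):
        if isinstance(value, np.integer):
            return int(value)
        return value

Columns declared with this type (e.g. Column(CoerceNumpyInt)) would then accept 
numpy.int8 and friends regardless of the DBAPI in use.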

Sunday, September 11, 2022 at 04:14:02 UTC+3, Mike Bayer:

> maybe email on the psycopg2 list / github tracker
>
> On Sat, Sep 10, 2022, at 4:15 PM, sector119 wrote:
>
> Hello,
>
> I got exception when I try to insert numpy.int8 type data. 
> sqlalchemy.exc.ProgrammingError: (psycopg.ProgrammingError) cannot adapt 
> type 'int8' using placeholder '%s' (format: AUTO)
>
> I'm trying to adapt it with code below, but it looks like a bit 
> comprehensive.
>
> Maybe you can suggest another solution? And what is a proper place to call 
> connection.adapters.register_dumper, is it ok to call it in that listener?
>
> *models/__init__.py*
>
> @listens_for(Pool, "connect")
> def pool_on_connect(connection, _):
> connection.adapters.register_dumper(np.integer, NumPyIntDumper)
>
> *models/adapters.py *
>
> from typing import Any
>
> import numpy as np
> from psycopg import errors as e, postgres
> from psycopg._wrappers import Int2, Int4, Int8, IntNumeric
> from psycopg.adapt import Dumper, PyFormat
>
>
> class _NumPyIntDumper(Dumper):
> def dump(self, obj: Any) -> bytes:
> t = type(obj)
>
> if not issubclass(t, np.integer):
> raise e.DataError(f"integer expected, got 
> {type(obj).__name__!r}")
>
> return str(obj).encode()
>
> def quote(self, obj: Any) -> bytes:
> value = self.dump(obj)
> return value if obj >= 0 else b" " + value
>
>
> class Int2Dumper(_NumPyIntDumper):
> oid = postgres.types["int2"].oid
>
>
> class Int4Dumper(_NumPyIntDumper):
> oid = postgres.types["int4"].oid
>
>
> class Int8Dumper(_NumPyIntDumper):
> oid = postgres.types["int8"].oid
>
>
> class IntNumericDumper(_NumPyIntDumper):
> oid = postgres.types["numeric"].oid
>
>
> class NumPyIntDumper(Dumper):
> def dump(self, obj: Any) -> bytes:
> raise TypeError(
> f"{type(self).__name__} is a dispatcher to other dumpers:"
> " dump() is not supposed to be called"
> )
>
> def get_key(self, obj: int, format: PyFormat) -> type:
> return self.upgrade(obj, format).cls
>
> _int2_dumper = Int2Dumper(Int2)  # smallint
> _int4_dumper = Int4Dumper(Int4)  # integer
> _int8_dumper = Int8Dumper(Int8)  # bigint
> _int_numeric_dumper = IntNumericDumper(IntNumeric)  # numeric
>
> def upgrade(self, obj: int, format: PyFormat) -> Dumper:
> if isinstance(obj, (np.int8, np.int16, np.uint8)):
> return self._int2_dumper
> elif isinstance(obj, (np.int32, np.uint16)):
> return self._int4_dumper
> elif isinstance(obj, (np.int64, np.uint32)):
> return self._int8_dumper
> elif isinstance(obj, np.uint64):
> return self._int_numeric_dumper
>
> raise e.DataError(f"{type(obj).__name__!r} not supported by 
> NumPyIntDumper")
>
>



[sqlalchemy] SQLAlchemy 2. psycopg3 type adapt

2022-09-10 Thread sector119
Hello,

I get an exception when I try to insert data of type numpy.int8:
sqlalchemy.exc.ProgrammingError: (psycopg.ProgrammingError) cannot adapt 
type 'int8' using placeholder '%s' (format: AUTO)

I'm trying to adapt it with the code below, but it looks a bit cumbersome.

Maybe you can suggest another solution? And what is the proper place to call 
connection.adapters.register_dumper? Is it OK to call it in that listener?

*models/__init__.py*

@listens_for(Pool, "connect")
def pool_on_connect(connection, _):
    connection.adapters.register_dumper(np.integer, NumPyIntDumper)

*models/adapters.py *

from typing import Any

import numpy as np
from psycopg import errors as e, postgres
from psycopg._wrappers import Int2, Int4, Int8, IntNumeric
from psycopg.adapt import Dumper, PyFormat


class _NumPyIntDumper(Dumper):
    def dump(self, obj: Any) -> bytes:
        t = type(obj)

        if not issubclass(t, np.integer):
            raise e.DataError(f"integer expected, got {type(obj).__name__!r}")

        return str(obj).encode()

    def quote(self, obj: Any) -> bytes:
        value = self.dump(obj)
        return value if obj >= 0 else b" " + value


class Int2Dumper(_NumPyIntDumper):
    oid = postgres.types["int2"].oid


class Int4Dumper(_NumPyIntDumper):
    oid = postgres.types["int4"].oid


class Int8Dumper(_NumPyIntDumper):
    oid = postgres.types["int8"].oid


class IntNumericDumper(_NumPyIntDumper):
    oid = postgres.types["numeric"].oid


class NumPyIntDumper(Dumper):
    def dump(self, obj: Any) -> bytes:
        raise TypeError(
            f"{type(self).__name__} is a dispatcher to other dumpers:"
            " dump() is not supposed to be called"
        )

    def get_key(self, obj: int, format: PyFormat) -> type:
        return self.upgrade(obj, format).cls

    _int2_dumper = Int2Dumper(Int2)  # smallint
    _int4_dumper = Int4Dumper(Int4)  # integer
    _int8_dumper = Int8Dumper(Int8)  # bigint
    _int_numeric_dumper = IntNumericDumper(IntNumeric)  # numeric

    def upgrade(self, obj: int, format: PyFormat) -> Dumper:
        if isinstance(obj, (np.int8, np.int16, np.uint8)):
            return self._int2_dumper
        elif isinstance(obj, (np.int32, np.uint16)):
            return self._int4_dumper
        elif isinstance(obj, (np.int64, np.uint32)):
            return self._int8_dumper
        elif isinstance(obj, np.uint64):
            return self._int_numeric_dumper

        raise e.DataError(f"{type(obj).__name__!r} not supported by NumPyIntDumper")



Re: [sqlalchemy] async mode and relationship more than one level deep

2021-07-22 Thread sector119
Thank You, works like a charm

Thursday, July 22, 2021 at 00:26:57 UTC+3, Mike Bayer:

> you need to chain those like:
>
> selectinload(Database.person).selectinload(Person.city)
>
> docs:
>
>
> https://docs.sqlalchemy.org/en/14/orm/loading_relationships.html#relationship-loading-with-loader-options
>
>
>
> On Wed, Jul 21, 2021, at 4:25 PM, sector119 wrote:
>
> Hello, Mike
>
> When I want to use some relationship I just set selectinload option on it 
> like
>
> s = select(Database).options(selectinload(Database.person))
>
> Here Database.person is relationship with Person model
>
> But what to do if I want to access some relationship of Person model? For 
> example Person.city,
>
> I got errors when I set selectinload(Database.person.city) or 
> selectinload(Person.city)
>
> Thank You
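The chained form applied to this query might look like the following (a 
sketch; Database, Person, and the async dbsession are names assumed from the 
thread):

from sqlalchemy import select
from sqlalchemy.orm import selectinload

# Chain the loader options so Person.city is eagerly loaded as well
s = select(Database).options(
    selectinload(Database.person).selectinload(Person.city)
)
result = await dbsession.execute(s)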



[sqlalchemy] Re: async mode and relationship more than one level deep

2021-07-21 Thread sector119
s = select(
  Database
).options(
  selectinload(
Database.person
  ).options(
joinedload(Person.city)
  )
)

Wednesday, July 21, 2021 at 23:25:05 UTC+3, sector119:

> Hello, Mike
>
> When I want to use some relationship I just set selectinload option on it 
> like
>
> s = select(Database).options(selectinload(Database.person))
>
> Here Database.person is relationship with Person model
>
> But what to do if I want to access some relationship of Person model? For 
> example Person.city,
>
> I got errors when I set selectinload(Database.person.city) or 
> selectinload(Person.city)
>
> Thank You
>



[sqlalchemy] async mode and relationship more than one level deep

2021-07-21 Thread sector119
Hello, Mike

When I want to use some relationship, I just set the selectinload option on 
it, like

s = select(Database).options(selectinload(Database.person))

Here Database.person is a relationship to the Person model.

But what should I do if I want to access a relationship of the Person model, 
for example Person.city?

I get errors when I set selectinload(Database.person.city) or 
selectinload(Person.city)

Thank You



[sqlalchemy] No NOWAIT for FOR UPDATE

2021-06-13 Thread sector119
Hello!

On a PostgreSQL database I can't get NOWAIT added to the FOR UPDATE clause.

SQLAlchemy 1.4.18

from sqlalchemy.future import select
from unity.models.unity import Internal

sql = select(
    Internal.person_id_internal
).with_for_update(nowait=True)

str(sql)
Out[12]: 'SELECT unity.internals.person_id_internal \nFROM unity.internals FOR UPDATE'
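A likely explanation (my reading, not stated in the thread): str(sql) compiles 
with SQLAlchemy's default dialect, which emits only FOR UPDATE; NOWAIT is 
rendered by the PostgreSQL dialect's compiler. Compiling against that dialect 
should show it:

from sqlalchemy.dialects import postgresql

# Compile with the PostgreSQL dialect rather than the default one
print(sql.compile(dialect=postgresql.dialect()))
# expected to end with: FOR UPDATE NOWAIT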



[sqlalchemy] Set PostgreSQL default index tablespace

2021-01-24 Thread sector119
Hello

Is it possible to set a default tablespace for all indexes?

I know that I can set it per index with
Index('my_index', my_table.c.data, postgresql_tablespace='my_tablespace')

But I want to set it by default somehow, so that when I just put "index=True" 
on a column, the index is created in that tablespace.

Thank You
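The per-index form above is the only one this thread shows; a small wrapper 
(my own sketch, not a SQLAlchemy feature) at least avoids repeating the 
tablespace name:

from sqlalchemy import Index

DEFAULT_INDEX_TABLESPACE = 'my_tablespace'  # assumed name

def ts_index(name, *expressions, **kw):
    # Apply the project-wide tablespace unless one is passed explicitly
    kw.setdefault('postgresql_tablespace', DEFAULT_INDEX_TABLESPACE)
    return Index(name, *expressions, **kw)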



Re: [sqlalchemy] Overlapping FK problem

2020-12-28 Thread sector119
Thanks!

service = relationship(
    'Service',
    primaryjoin='and_(Service.schema == Transaction.schema, '
                'Service.id == foreign(Transaction.service_id))')

eliminates the error.

But why do I not need to do the same with organization = relationship(...)?

Is it because the Service model references the Organization one?
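For reference, applying the advice quoted below to all three relationships 
would look roughly like this (my consolidation, a fragment of the Transaction 
model from the thread; foreign() stays only on the *_id columns, never on 
Transaction.schema):

class Transaction(Base):
    __tablename__ = 'transactions'
    # ... columns and __table_args__ as in the original post ...

    service = relationship(
        'Service',
        primaryjoin='and_(Service.schema == Transaction.schema, '
                    'Service.id == foreign(Transaction.service_id))')
    organization = relationship(
        'Organization',
        primaryjoin='and_(Organization.schema == Transaction.schema, '
                    'Organization.id == foreign(Transaction.organization_id))')
    person = relationship(
        'Person',
        primaryjoin='and_(Person.schema == Transaction.schema, '
                    'Person.id == foreign(Transaction.person_id))')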
 

Monday, December 28, 2020 at 19:53:24 UTC+2, Mike Bayer:

>
>
> On Mon, Dec 28, 2020, at 12:37 PM, sector119 wrote:
>
> Thank You, Mike,
>
> Do you mean that I have to remove all foreign() annotations from all my 
> relationships like this?
> service = relationship(*'Service'*, primaryjoin=*'and_(Service.schema == 
> Transaction.schema, Service.id == Transaction.service_id)'*)
> organization = relationship(*'Organization'*, 
> primaryjoin=*'and_(Organization.schema 
> == Transaction.schema, Organization.id == Transaction.organization_id)'*)
> person = relationship(*'Person'*, primaryjoin=*'and_(Person.schema == 
> Transaction.schema, Person.id == Transaction.person_id)'*)
>
>
>
> no, only the foreign() annotation that surrounds "Transaction.schema".  
> leave the one that surrounds "Transaction.service_id".   that should 
> eliminate the error.
>
>
>
> if so, I still get the same warning..
>
> And must I set primaryjoin for service and organization relationships at 
> all if I've set up FK for them
> ForeignKeyConstraint(
> (schema, service_id),
> (Service.schema, Service.id)
> )
> and
> ForeignKeyConstraint(
> (schema, organization_id),
> (Organization.schema, Organization.id)
> ),
>
>
> понедельник, 28 декабря 2020 г. в 17:39:25 UTC+2, Mike Bayer: 
>
>
> by having foreign() on the Transaction.schema column, that means when you 
> do this:
>
> t1 = Transaction()
>
> t1.service = some_service()
>
> the ORM is being instructed to copy some_service.schema over to 
> t1.schema.   Because "foreign" means "this is the column that mirrors the 
> value of a canonical value on the related row".
>
> so you have to decide what should happen if you did this:
>
> t1.service = Service(schema="a", ...)
> t1.organiazation = Organization(schema="b", ...)
> t1.person = Person(schema="c", ...)
>
> is the above possible?  or an error condition?
>
> overall, if the plan is that "schema" will match across all the objects 
> involved, and your application will make sure those are all set as needed, 
> just remove the foreign() annotation from the Transaction.service column.  
> the primary joins already have enough information based on the service_id, 
> organization_id and person_id columns.
>
>
> On Mon, Dec 28, 2020, at 6:04 AM, sector119 wrote:
>
> I get following warning with my model:
>
> SAWarning: relationship 'Transaction.organization' will copy column 
> organizations.schema to column transactions.schema, which conflicts with 
> relationship(s): 'Transaction.service' (copies services.schema to 
> transactions.schema). If this is not the intention, consider if these 
> relationships should be linked with back_populates, or if viewonly=True 
> should be applied to one or more if they are read-only. For the less common 
> case that foreign key constraints are partially overlapping, the 
> orm.foreign() annotation can be used to isolate the columns that should be 
> written towards.   The 'overlaps' parameter may be used to remove this 
> warning.
>
> I have all my tables partitioned by "schema" column so I have to put it to 
> every PK and FK
> I add "primaryjoin" to my relations as described at 
> https://docs.sqlalchemy.org/en/14/orm/join_conditions.html#overlapping-foreign-keys
> but still get that warning. How can I fix it?
>
> Thank You 
>
> class Transaction(Base):
> __tablename__ = *'transactions'*
>
> schema = Column(String(63), nullable=False, index=True)
> id = Column(BigInteger, nullable=False, index=True)
>
> user_id = Column(Integer, ForeignKey(SYSTEM_SCHEMA + *'.users.id 
> <http://users.id>'*), nullable=False, index=True)
> office_id = Column(Integer, ForeignKey(SYSTEM_SCHEMA + *'.offices.id 
> <http://offices.id>'*), nullable=False, index=True)
> service_id = Column(Integer, nullable=False, index=True)
> organization_id = Column(Integer, nullable=False, index=True)
>
> ...
>
> service = relationship(*'Service'*,
> primaryjoin=*'and_(Service.schema == foreign(Transaction.schema), 
> Service.id == foreign(Transaction.service_id))'*)
> organization = relationship(*'Organization'*,
> primaryjoin=*'and_(Organization.schema == foreign(Transaction.schema), 
> Organization.id == foreign(Transaction.organization_id))'*)
> person = relationship(*'Pers

Re: [sqlalchemy] Overlapping FK problem

2020-12-28 Thread sector119
>>> is the above possible?  or an error condition?

I don't want to restrict that case 

>>> overall, if the plan is that "schema" will match across all the objects 
involved, and your application will make sure those are all set as needed, 
just
>>> remove the foreign() annotation from the Transaction.service column.  
the primary joins already have enough information based on the
>>> service_id, organization_id and person_id columns.

"schema" will match across all the objects - 100%



Monday, December 28, 2020 at 17:39:25 UTC+2, Mike Bayer:

> by having foreign() on the Transaction.schema column, that means when you 
> do this:
>
> t1 = Transaction()
>
> t1.service = some_service()
>
> the ORM is being instructed to copy some_service.schema over to 
> t1.schema.   Because "foreign" means "this is the column that mirrors the 
> value of a canonical value on the related row".
>
> so you have to decide what should happen if you did this:
>
> t1.service = Service(schema="a", ...)
> t1.organiazation = Organization(schema="b", ...)
> t1.person = Person(schema="c", ...)
>
> is the above possible?  or an error condition?
>
> overall, if the plan is that "schema" will match across all the objects 
> involved, and your application will make sure those are all set as needed, 
> just remove the foreign() annotation from the Transaction.service column.  
> the primary joins already have enough information based on the service_id, 
> organization_id and person_id columns.
>
>
> On Mon, Dec 28, 2020, at 6:04 AM, sector119 wrote:
>
> I get following warning with my model:
>
> SAWarning: relationship 'Transaction.organization' will copy column 
> organizations.schema to column transactions.schema, which conflicts with 
> relationship(s): 'Transaction.service' (copies services.schema to 
> transactions.schema). If this is not the intention, consider if these 
> relationships should be linked with back_populates, or if viewonly=True 
> should be applied to one or more if they are read-only. For the less common 
> case that foreign key constraints are partially overlapping, the 
> orm.foreign() annotation can be used to isolate the columns that should be 
> written towards.   The 'overlaps' parameter may be used to remove this 
> warning.
>
> I have all my tables partitioned by "schema" column so I have to put it to 
> every PK and FK
> I add "primaryjoin" to my relations as described at 
> https://docs.sqlalchemy.org/en/14/orm/join_conditions.html#overlapping-foreign-keys
> but still get that warning. How can I fix it?
>
> Thank You 
>
> class Transaction(Base):
> __tablename__ = *'transactions'*
>
> schema = Column(String(63), nullable=False, index=True)
> id = Column(BigInteger, nullable=False, index=True)
>
> user_id = Column(Integer, ForeignKey(SYSTEM_SCHEMA + *'.users.id 
> <http://users.id>'*), nullable=False, index=True)
> office_id = Column(Integer, ForeignKey(SYSTEM_SCHEMA + *'.offices.id 
> <http://offices.id>'*), nullable=False, index=True)
> service_id = Column(Integer, nullable=False, index=True)
> organization_id = Column(Integer, nullable=False, index=True)
>
> ...
>
> service = relationship(*'Service'*,
> primaryjoin=*'and_(Service.schema == foreign(Transaction.schema), 
> Service.id == foreign(Transaction.service_id))'*)
> organization = relationship(*'Organization'*,
> primaryjoin=*'and_(Organization.schema == foreign(Transaction.schema), 
> Organization.id == foreign(Transaction.organization_id))'*)
> person = relationship(*'Person'*,
> primaryjoin=*'and_(Person.schema == foreign(Transaction.schema), 
> Person.id == foreign(Transaction.person_id))'*)
> rollback_user = relationship(*'User'*, primaryjoin=*'User.id == 
> Transaction.rollback_user_id'*)
>
> __table_args__ = (
> PrimaryKeyConstraint(*'schema'*, *'id'*),
> ForeignKeyConstraint(
> (schema, service_id),
> (Service.schema, Service.id)
> ),
> ForeignKeyConstraint(
> (schema, organization_id),
> (Organization.schema, Organization.id)
> ),
> {
> *'postgresql_partition_by'*: 
> *'LIST (schema)'* }
> )
>
>

Re: [sqlalchemy] Overlapping FK problem

2020-12-28 Thread sector119
Thank you, Mike.

Do you mean that I have to remove all foreign() annotations from all my 
relationships, like this?

service = relationship(
    'Service',
    primaryjoin='and_(Service.schema == Transaction.schema, '
                'Service.id == Transaction.service_id)')
organization = relationship(
    'Organization',
    primaryjoin='and_(Organization.schema == Transaction.schema, '
                'Organization.id == Transaction.organization_id)')
person = relationship(
    'Person',
    primaryjoin='and_(Person.schema == Transaction.schema, '
                'Person.id == Transaction.person_id)')

If so, I still get the same warning.

And do I need to set primaryjoin for the service and organization 
relationships at all, given that I've set up FKs for them with

ForeignKeyConstraint(
    (schema, service_id),
    (Service.schema, Service.id)
)

and

ForeignKeyConstraint(
    (schema, organization_id),
    (Organization.schema, Organization.id)
)


Monday, December 28, 2020 at 17:39:25 UTC+2, Mike Bayer:

> by having foreign() on the Transaction.schema column, that means when you 
> do this:
>
> t1 = Transaction()
>
> t1.service = some_service()
>
> the ORM is being instructed to copy some_service.schema over to 
> t1.schema.   Because "foreign" means "this is the column that mirrors the 
> value of a canonical value on the related row".
>
> so you have to decide what should happen if you did this:
>
> t1.service = Service(schema="a", ...)
> t1.organiazation = Organization(schema="b", ...)
> t1.person = Person(schema="c", ...)
>
> is the above possible?  or an error condition?
>
> overall, if the plan is that "schema" will match across all the objects 
> involved, and your application will make sure those are all set as needed, 
> just remove the foreign() annotation from the Transaction.service column.  
> the primary joins already have enough information based on the service_id, 
> organization_id and person_id columns.
>
>
> On Mon, Dec 28, 2020, at 6:04 AM, sector119 wrote:
>
> I get following warning with my model:
>
> SAWarning: relationship 'Transaction.organization' will copy column 
> organizations.schema to column transactions.schema, which conflicts with 
> relationship(s): 'Transaction.service' (copies services.schema to 
> transactions.schema). If this is not the intention, consider if these 
> relationships should be linked with back_populates, or if viewonly=True 
> should be applied to one or more if they are read-only. For the less common 
> case that foreign key constraints are partially overlapping, the 
> orm.foreign() annotation can be used to isolate the columns that should be 
> written towards.   The 'overlaps' parameter may be used to remove this 
> warning.
>
> I have all my tables partitioned by "schema" column so I have to put it to 
> every PK and FK
> I add "primaryjoin" to my relations as described at 
> https://docs.sqlalchemy.org/en/14/orm/join_conditions.html#overlapping-foreign-keys
> but still get that warning. How can I fix it?
>
> Thank You 
>
> class Transaction(Base):
> __tablename__ = *'transactions'*
>
> schema = Column(String(63), nullable=False, index=True)
> id = Column(BigInteger, nullable=False, index=True)
>
> user_id = Column(Integer, ForeignKey(SYSTEM_SCHEMA + *'.users.id 
> <http://users.id>'*), nullable=False, index=True)
> office_id = Column(Integer, ForeignKey(SYSTEM_SCHEMA + *'.offices.id 
> <http://offices.id>'*), nullable=False, index=True)
> service_id = Column(Integer, nullable=False, index=True)
> organization_id = Column(Integer, nullable=False, index=True)
>
> ...
>
> service = relationship(*'Service'*,
> primaryjoin=*'and_(Service.schema == foreign(Transaction.schema), 
> Service.id == foreign(Transaction.service_id))'*)
> organization = relationship(*'Organization'*,
> primaryjoin=*'and_(Organization.schema == foreign(Transaction.schema), 
> Organization.id == foreign(Transaction.organization_id))'*)
> person = relationship(*'Person'*,
> primaryjoin=*'and_(Person.schema == foreign(Transaction.schema), 
> Person.id == foreign(Transaction.person_id))'*)
> rollback_user = relationship(*'User'*, primaryjoin=*'User.id == 
> Transaction.rollback_user_id'*)
>
> __table_args__ = (
> PrimaryKeyConstraint(*'schema'*, *'id'*),
> ForeignKeyConstraint(
> (schema, service_id),
> (Service.schema, Service.id)
> ),
> ForeignKeyConstraint(
> (schema, organization_id),
> (Organization.schema, Organization.id)
> ),
> {
> *'postgresql_partition_by'*: 
> *'LIST (schema)'* }
> )
>
>

[sqlalchemy] Overlapping FK problem

2020-12-28 Thread sector119
I get the following warning with my model:

SAWarning: relationship 'Transaction.organization' will copy column 
organizations.schema to column transactions.schema, which conflicts with 
relationship(s): 'Transaction.service' (copies services.schema to 
transactions.schema). If this is not the intention, consider if these 
relationships should be linked with back_populates, or if viewonly=True 
should be applied to one or more if they are read-only. For the less common 
case that foreign key constraints are partially overlapping, the 
orm.foreign() annotation can be used to isolate the columns that should be 
written towards.   The 'overlaps' parameter may be used to remove this 
warning.

All my tables are partitioned by the "schema" column, so I have to include it 
in every PK and FK.
I added "primaryjoin" to my relationships as described at 
https://docs.sqlalchemy.org/en/14/orm/join_conditions.html#overlapping-foreign-keys
but I still get that warning. How can I fix it?

Thank You 

class Transaction(Base):
    __tablename__ = 'transactions'

    schema = Column(String(63), nullable=False, index=True)
    id = Column(BigInteger, nullable=False, index=True)

    user_id = Column(Integer, ForeignKey(SYSTEM_SCHEMA + '.users.id'), nullable=False, index=True)
    office_id = Column(Integer, ForeignKey(SYSTEM_SCHEMA + '.offices.id'), nullable=False, index=True)
    service_id = Column(Integer, nullable=False, index=True)
    organization_id = Column(Integer, nullable=False, index=True)

    ...

    service = relationship(
        'Service',
        primaryjoin='and_(Service.schema == foreign(Transaction.schema), '
                    'Service.id == foreign(Transaction.service_id))')
    organization = relationship(
        'Organization',
        primaryjoin='and_(Organization.schema == foreign(Transaction.schema), '
                    'Organization.id == foreign(Transaction.organization_id))')
    person = relationship(
        'Person',
        primaryjoin='and_(Person.schema == foreign(Transaction.schema), '
                    'Person.id == foreign(Transaction.person_id))')
    rollback_user = relationship('User', primaryjoin='User.id == Transaction.rollback_user_id')

    __table_args__ = (
        PrimaryKeyConstraint('schema', 'id'),
        ForeignKeyConstraint(
            (schema, service_id),
            (Service.schema, Service.id)
        ),
        ForeignKeyConstraint(
            (schema, organization_id),
            (Organization.schema, Organization.id)
        ),
        {
            'postgresql_partition_by': 'LIST (schema)'
        }
    )



Re: [sqlalchemy] from_statement and cte problem

2019-07-01 Thread sector119
For example, I have a 'path' property with a CTE like the one below; it 
wouldn't return a set of rows, but a scalar value as an array[] of rows. Would 
it be possible to load this property as a subquery()?

Of course I can include that CTE query in my query(Locality) using subquery(), 
but it would be cool if I were able to "undefer" that property somehow, like a 
relationship )

@property
def path(self):
    session = object_session(self)

    def get_locality_path_q(locality_id):
        parent_q = session.query(Locality).filter(Locality.id == locality_id).cte(recursive=True)

        parents = aliased(parent_q)

        locality_alias = aliased(Locality)

        q = parent_q.union_all(
            session.query(locality_alias).join(parents, locality_alias.id == parents.c.parent_id)
        )

        cte = aliased(Locality, q)

        return session.query(cte).order_by(cte.id)

    return get_locality_path_q(self.id)

Tuesday, July 2, 2019, 0:03:20 UTC+3, Mike Bayer wrote:
>
> I don't know what you mean.  is there SQL you know you want or is that the 
> part you want to figure out?
>
>
> On Mon, Jul 1, 2019, at 4:21 PM, sector119 wrote:
>
> Mike, is it possible to load "@property" as subquery? Maybe as 
> ARRAY[path_item, path_item, ...]
>
> Sunday, June 30, 2019, 20:26:42 UTC+3, Mike Bayer wrote:
>
>
>
> On Sun, Jun 30, 2019, at 5:37 AM, sector119 wrote:
>
> Nice, thanks a lot, Mike, now it works as expected
>
>
> that's great.  the docs are not good here, there's not enough discussion 
> of "aliased()" , from_statement() and what they do, also I'm changing some 
> details of how they do their work for 1.4 in any case so documentation 
> efforts will be needed.
>
>
>
>
> @property
> *def *path(self):
> session = object_session(self)
>
> *def *get_locality_path_q(locality_id):
> parent_q = session.query(Locality).filter(Locality.id == 
> locality_id).cte(recursive=*True*)
>
> parents = aliased(parent_q)
>
> locality_alias = aliased(Locality)
>
> q = parent_q.union_all(
> session.query(locality_alias).join(parents, locality_alias.id == 
> parents.c.parent_id)
> )
>
> cte = aliased(Locality, q)
>
> *return *session.query(cte).order_by(cte.id)
>
> *return *get_locality_path_q(self.id)
>
>
> Sunday, June 30, 2019, 1:11:21 UTC+3, Mike Bayer wrote:
>
>
>
> On Sat, Jun 29, 2019, at 11:24 AM, sector119 wrote:
>
> Hello,
>
>
> I have Locality model with 'path' property to get path from 'root' of tree to 
> current item, everything works ok, but
>
> I can't get result as Locality instance list..
>
> When I use 
> *'*object_session(self).query(Locality).from_statement(q).order_by(Locality.id)'
>  I get 
>
> sqlalchemy.exc.ArgumentError: from_statement accepts text(), select(), and 
> union() objects only.
>
>
> How can I adopt results to Locality model?
>
>
>
> *class *Locality(Base):
> __tablename__ = 
> *'localities'*
> __table_args__ = {*'schema'*: SYSTEM_SCHEMA}
>
> id = Column(Integer, primary_key=*True*)
> parent_id = Column(Integer, ForeignKey(SYSTEM_SCHEMA + *'.localities.id 
> <http://localities.id>'*))
> name = Column(UnicodeText, nullable=*False*)
> type = Column(Integer, nullable=*False*)
>
> @property
> *def *path(self):
> *def *get_locality_path_q(locality_id):
> top_q = select([
> Locality.id,
> Locality.parent_id,
> Locality.name,
> Locality.type,
> ]).\
> where(Locality.id == locality_id).\
> cte(recursive=*True*)
>
> parents = aliased(top_q)
>
> locality_alias = aliased(Locality)
>
> q = top_q.union_all(
> select([
> locality_alias.id,
> locality_alias.parent_id,
> locality_alias.name,
> locality_alias.type
> ]).select_from(join(locality_alias, parents, 
> locality_alias.id == parents.c.parent_id))
> )
>
> 
> *# return object_session(self).query(q).order_by(q.c.id <http://q.c.id>)* 
>*return 
> *object_session(self).query(Locality).from_statement(q).order_by(Locality.id)
>
> *return *get_locality_path_q(self.id)
>
>
> above, "q" is a CTE, not a SELECT, meaning it's something you can select 
> FROM, like a table.  Call select() to SELECT from it:
>
> return (
> obje

Re: [sqlalchemy] from_statement and cte problem

2019-07-01 Thread sector119
Mike, is it possible to load "@property" as subquery? Maybe as 
ARRAY[path_item, path_item, ...]

Sunday, June 30, 2019, 20:26:42 UTC+3, Mike Bayer wrote:
>
>
>
> On Sun, Jun 30, 2019, at 5:37 AM, sector119 wrote:
>
> Nice, thanks a lot, Mike, now it works as expected
>
>
> that's great.  the docs are not good here, there's not enough discussion 
> of "aliased()" , from_statement() and what they do, also I'm changing some 
> details of how they do their work for 1.4 in any case so documentation 
> efforts will be needed.
>
>
>
>
> @property
> *def *path(self):
> session = object_session(self)
>
> *def *get_locality_path_q(locality_id):
> parent_q = session.query(Locality).filter(Locality.id == 
> locality_id).cte(recursive=*True*)
>
> parents = aliased(parent_q)
>
> locality_alias = aliased(Locality)
>
> q = parent_q.union_all(
> session.query(locality_alias).join(parents, locality_alias.id == 
> parents.c.parent_id)
> )
>
> cte = aliased(Locality, q)
>
> *return *session.query(cte).order_by(cte.id)
>
> *return *get_locality_path_q(self.id)
>
>
> Sunday, June 30, 2019, 1:11:21 UTC+3, Mike Bayer wrote:
>
>
>
> On Sat, Jun 29, 2019, at 11:24 AM, sector119 wrote:
>
> Hello,
>
>
> I have Locality model with 'path' property to get path from 'root' of tree to 
> current item, everything works ok, but
>
> I can't get result as Locality instance list..
>
> When I use 
> *'*object_session(self).query(Locality).from_statement(q).order_by(Locality.id)'
>  I get 
>
> sqlalchemy.exc.ArgumentError: from_statement accepts text(), select(), and 
> union() objects only.
>
>
> How can I adopt results to Locality model?
>
>
>
> *class *Locality(Base):
> __tablename__ = 
> *'localities'*
> __table_args__ = {*'schema'*: SYSTEM_SCHEMA}
>
> id = Column(Integer, primary_key=*True*)
> parent_id = Column(Integer, ForeignKey(SYSTEM_SCHEMA + *'.localities.id 
> <http://localities.id>'*))
> name = Column(UnicodeText, nullable=*False*)
> type = Column(Integer, nullable=*False*)
>
> @property
> *def *path(self):
> *def *get_locality_path_q(locality_id):
> top_q = select([
> Locality.id,
> Locality.parent_id,
> Locality.name,
> Locality.type,
> ]).\
> where(Locality.id == locality_id).\
> cte(recursive=*True*)
>
> parents = aliased(top_q)
>
> locality_alias = aliased(Locality)
>
> q = top_q.union_all(
> select([
> locality_alias.id,
> locality_alias.parent_id,
> locality_alias.name,
> locality_alias.type
> ]).select_from(join(locality_alias, parents, 
> locality_alias.id == parents.c.parent_id))
> )
>
> 
> *# return object_session(self).query(q).order_by(q.c.id <http://q.c.id>)* 
>*return 
> *object_session(self).query(Locality).from_statement(q).order_by(Locality.id)
>
> *return *get_locality_path_q(self.id)
>
>
> above, "q" is a CTE, not a SELECT, meaning it's something you can select 
> FROM, like a table.  Call select() to SELECT from it:
>
> return (
> object_session(self)
> .query(Locality)
> .from_statement(q.select().order_by(q.c.id))
> )
>
>
> additionally, from_statement() does not allow further changes to the 
> statement and the ORDER BY must be in terms of thing you are selecting 
> from, in this case q.c.id
>
> your code would be easier if you made use of top_q as a FROM object rather 
> than a statement:
>
> lcte = aliased(Locality, q)
>
> return (
> object_session(self)
> .query(lcte)
> .order_by(lcte.id)
> )
>
>
> There's not too many doc examples of how aliased() works with FROM clauses 
> but the basic idea is at:
>
>
> https://docs.sqlalchemy.org/en/13/orm/tutorial.html#selecting-entities-from-subqueries
>
> https://docs.sqlalchemy.org/en/13/orm/query.html#sqlalchemy.orm.aliased
>
> A little more on aliased() with CTE is written more for query.cte() which 
> you could also be using here:
>
>
> https://docs.sqlalchemy.org/en/13/orm/query.html?highlight=cte#sqlalchemy.orm.query.Query.cte
>
>
>
>

Re: [sqlalchemy] from_statement and cte problem

2019-06-30 Thread sector119
Nice, thanks a lot, Mike, now it works as expected

@property
def path(self):
    session = object_session(self)

    def get_locality_path_q(locality_id):
        parent_q = session.query(Locality).filter(Locality.id == locality_id).cte(recursive=True)

        parents = aliased(parent_q)

        locality_alias = aliased(Locality)

        q = parent_q.union_all(
            session.query(locality_alias).join(parents, locality_alias.id == parents.c.parent_id)
        )

        cte = aliased(Locality, q)

        return session.query(cte).order_by(cte.id)

    return get_locality_path_q(self.id)


Sunday, June 30, 2019, 1:11:21 UTC+3, Mike Bayer wrote:
>
>
>
> On Sat, Jun 29, 2019, at 11:24 AM, sector119 wrote:
>
> Hello,
>
>
> I have Locality model with 'path' property to get path from 'root' of tree to 
> current item, everything works ok, but
>
> I can't get result as Locality instance list..
>
> When I use 
> *'*object_session(self).query(Locality).from_statement(q).order_by(Locality.id)'
>  I get 
>
> sqlalchemy.exc.ArgumentError: from_statement accepts text(), select(), and 
> union() objects only.
>
>
> How can I adopt results to Locality model?
>
>
>
> *class *Locality(Base):
> __tablename__ = 
> *'localities'*
> __table_args__ = {*'schema'*: SYSTEM_SCHEMA}
>
> id = Column(Integer, primary_key=*True*)
> parent_id = Column(Integer, ForeignKey(SYSTEM_SCHEMA + *'.localities.id 
> <http://localities.id>'*))
> name = Column(UnicodeText, nullable=*False*)
> type = Column(Integer, nullable=*False*)
>
> @property
> *def *path(self):
> *def *get_locality_path_q(locality_id):
> top_q = select([
> Locality.id,
> Locality.parent_id,
> Locality.name,
> Locality.type,
> ]).\
> where(Locality.id == locality_id).\
> cte(recursive=*True*)
>
> parents = aliased(top_q)
>
> locality_alias = aliased(Locality)
>
> q = top_q.union_all(
> select([
> locality_alias.id,
> locality_alias.parent_id,
> locality_alias.name,
> locality_alias.type
> ]).select_from(join(locality_alias, parents, 
> locality_alias.id == parents.c.parent_id))
> )
>
> 
> *# return object_session(self).query(q).order_by(q.c.id <http://q.c.id>)* 
>*return 
> *object_session(self).query(Locality).from_statement(q).order_by(Locality.id)
>
> *return *get_locality_path_q(self.id)
>
>
> above, "q" is a CTE, not a SELECT, meaning it's something you can select 
> FROM, like a table.  Call select() to SELECT from it:
>
> return (
> object_session(self)
> .query(Locality)
> .from_statement(q.select().order_by(q.c.id))
> )
>
>
> additionally, from_statement() does not allow further changes to the 
> statement and the ORDER BY must be in terms of thing you are selecting 
> from, in this case q.c.id
>
> your code would be easier if you made use of top_q as a FROM object rather 
> than a statement:
>
> lcte = aliased(Locality, q)
>
> return (
> object_session(self)
> .query(lcte)
> .order_by(lcte.id)
> )
>
>
> There's not too many doc examples of how aliased() works with FROM clauses 
> but the basic idea is at:
>
>
> https://docs.sqlalchemy.org/en/13/orm/tutorial.html#selecting-entities-from-subqueries
>
> https://docs.sqlalchemy.org/en/13/orm/query.html#sqlalchemy.orm.aliased
>
> A little more on aliased() with CTE is written more for query.cte() which 
> you could also be using here:
>
>
> https://docs.sqlalchemy.org/en/13/orm/query.html?highlight=cte#sqlalchemy.orm.query.Query.cte
>
>
>
>
>
>

[sqlalchemy] from_statement and cte problem

2019-06-29 Thread sector119


Hello,

I have a Locality model with a 'path' property that returns the path from the 
'root' of the tree to the current item. Everything works OK, but I can't get 
the result as a list of Locality instances.

When I use 
object_session(self).query(Locality).from_statement(q).order_by(Locality.id) 
I get:

sqlalchemy.exc.ArgumentError: from_statement accepts text(), select(), and 
union() objects only.

How can I adapt the results to the Locality model?



class Locality(Base):
    __tablename__ = 'localities'
    __table_args__ = {'schema': SYSTEM_SCHEMA}

    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey(SYSTEM_SCHEMA + '.localities.id'))
    name = Column(UnicodeText, nullable=False)
    type = Column(Integer, nullable=False)

    @property
    def path(self):
        def get_locality_path_q(locality_id):
            top_q = select([
                Locality.id,
                Locality.parent_id,
                Locality.name,
                Locality.type,
            ]).\
                where(Locality.id == locality_id).\
                cte(recursive=True)

            parents = aliased(top_q)

            locality_alias = aliased(Locality)

            q = top_q.union_all(
                select([
                    locality_alias.id,
                    locality_alias.parent_id,
                    locality_alias.name,
                    locality_alias.type
                ]).select_from(join(locality_alias, parents, locality_alias.id == parents.c.parent_id))
            )

            # return object_session(self).query(q).order_by(q.c.id)
            return object_session(self).query(Locality).from_statement(q).order_by(Locality.id)

        return get_locality_path_q(self.id)



Re: [sqlalchemy] Alembic and postgresql multiple schema question

2018-09-01 Thread sector119


> if you want individual migration sections for each schema, with or 
> without "schema" written in, there's ways to do all that also but that 
> doesn't seem necessary if you are sharing a single model with multiple 
> identical schemas. 
>

The problem is that someone might alter some tables in some schemas with plain 
SQL in psql ((
So it would be perfect to have individual migration sections for each schema, 
_with_ "schema" written in.

Thank you
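One way to get the schema written into each revision explicitly (my own 
sketch, not something suggested in the thread) is to loop over the schemas 
inside the migration functions themselves:

import sqlalchemy as sa
from alembic import op

SCHEMAS = ['schema_a', 'schema_b']  # assumed schema names

def upgrade():
    # Emit the same DDL once per tenant schema, with schema= written out;
    # 'streets' and 'note' are hypothetical names used for illustration.
    for schema in SCHEMAS:
        op.add_column('streets', sa.Column('note', sa.Text()), schema=schema)

def downgrade():
    for schema in SCHEMAS:
        op.drop_column('streets', 'note', schema=schema)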



Re: [sqlalchemy] Alembic and postgresql multiple schema question

2018-08-31 Thread sector119


from __future__ import with_statement
from alembic import context
from sqlalchemy import engine_from_config, pool
from logging.config import fileConfig

# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config

# Interpret the config file for Python logging.
# This line sets up loggers basically.
fileConfig(config.config_file_name)

# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
from epsilon.models.meta import metadata
target_metadata = metadata

# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.

schema_names = config.get_main_option('schemas')


def run_migrations_offline():
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well.  By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.

    """
    url = config.get_main_option("sqlalchemy.url")
    context.configure(
        url=url, target_metadata=target_metadata, literal_binds=True)

    with context.begin_transaction():
        context.run_migrations()


def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.

    """
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix='sqlalchemy.',
        poolclass=pool.NullPool)

    with connectable.connect() as connection:
        for schema_name in schema_names.split():
            conn = connection.execution_options(schema_translate_map={None: schema_name})

            print("Migrating schema %s" % schema_name)

            context.configure(
                connection=conn,
                target_metadata=target_metadata
            )

            with context.begin_transaction():
                context.run_migrations()


if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()
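One knob not shown in this env.py (treat it as an assumption about the missing 
piece, not something confirmed in the thread) is include_schemas=True in 
context.configure(); without it, autogenerate only compares the default 
schema, which would explain why no schema= keywords end up in the generated 
revision:

# Sketch: the same configure call with schema comparison enabled
context.configure(
    connection=conn,
    target_metadata=target_metadata,
    include_schemas=True,
)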


Thursday, August 30, 2018, 16:09:35 UTC+3, Mike Bayer wrote:
>
> On Thu, Aug 30, 2018 at 7:11 AM, sector119  > wrote: 
> > Mike, but in run_migrations_online() I use conn = 
> > connection.execution_options(schema_translate_map={None: schema_name}) 
> > But I get no schemas at resulting alembic/versions/file.py 
>
> can you share your env.py 
>
> > 
> > 
> >> Wednesday, August 29, 2018, 20:46:07 UTC+3, Mike Bayer wrote: 
> >> 
> >> On Wed, Aug 29, 2018 at 5:12 AM, sector119  wrote: 
> >> > Hello 
> >> > 
> >> > I have N schemas with the same set of tables, 1 system schema with 
> >> > users, 
> >> > groups, ... tables and 6 schemas with streets, organizations, 
> >> > transactions, 
> >> > ... tables. 
> >> > On those schemas tables I don't set __table_args__ = ({'schema': 
> >> > SCHEMA},) 
> >> > I just call dbsession.execute('SET search_path TO system, %s' % 
> SCHEMA) 
> >> > before sql queries. 
> >> > 
> >> > When I make some changes in my model structures I want to refactor 
> table 
> >> > in 
> >> > all schemas using Alembic, how can I do that? 
> >> > Maybe I can make some loop over my schemas somewhere? 
> >> 
> >> setting the search path is going to confuse SQLAlchemy's table 
> >> reflection process, such that it assumes a Table of a certain schema 
> >> does not require a "schema" argument, because it is already in the 
> >> search path. 
> >> 
> >> Keep the search path set to "public", see 
> >> 
> >> 
> http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#remote-schema-table-introspection-and-postgresql-search-path.
>  
>
> >> There is an option to change this behavior mentioned in that 
> >> section called postgresql_ignore_search_path, however it isn't 
> >> guaranteed to suit all use cases.   if that makes your case work, then 
> >> that would be all you need.  if not, then read on... 
> >> 
> >> For the officially supported way to do this, you want to have the 
> >> explicit schema name inside the SQL - but this can be automated for a 
> >> multi-tenancy application.  Use the schema translation map feature: 
> >> 
> >> 
> http:

Re: [sqlalchemy] Alembic and postgresql multiple schema question

2018-08-30 Thread sector119
Mike, in run_migrations_online() I do use conn = 
connection.execution_options(schema_translate_map={None: schema_name}), but I 
get no schemas in the resulting alembic/versions/file.py.


Wednesday, August 29, 2018, 20:46:07 UTC+3, Mike Bayer wrote:
>
> On Wed, Aug 29, 2018 at 5:12 AM, sector119  > wrote: 
> > Hello 
> > 
> > I have N schemas with the same set of tables, 1 system schema with 
> users, 
> > groups, ... tables and 6 schemas with streets, organizations, 
> transactions, 
> > ... tables. 
> > On those schemas tables I don't set __table_args__ = ({'schema': 
> SCHEMA},) 
> > I just call dbsession.execute('SET search_path TO system, %s' % SCHEMA) 
> > before sql queries. 
> > 
> > When I make some changes in my model structures I want to refactor table 
> in 
> > all schemas using Alembic, how can I do that? 
> > Maybe I can make some loop over my schemas somewhere? 
>
> setting the search path is going to confuse SQLAlchemy's table 
> reflection process, such that it assumes a Table of a certain schema 
> does not require a "schema" argument, because it is already in the 
> search path. 
>
> Keep the search path set to "public", see 
>
> http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#remote-schema-table-introspection-and-postgresql-search-path.
>  
>
> There is an option to change this behavior mentioned in that 
> section called postgresql_ignore_search_path, however it isn't 
> guaranteed to suit all use cases.   if that makes your case work, then 
> that would be all you need.  if not, then read on... 
>
> For the officially supported way to do this, you want to have the 
> explicit schema name inside the SQL - but this can be automated for a 
> multi-tenancy application.  Use the schema translation map feature: 
>
> http://docs.sqlalchemy.org/en/latest/core/connections.html?highlight=execution_options#schema-translating.
>  
>
>
>
> > 
> > 
> > Thanks 
> > 
>

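For reference, a minimal sketch of the schema_translate_map approach described
above, assuming a multi-tenant layout where the tenant schema is chosen at run
time. The Street model, connection URL and schema names are placeholders, not
the poster's real code:

from sqlalchemy import Column, Integer, Text, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Street(Base):
    # no explicit schema: the translate map fills one in at execution time
    __tablename__ = 'streets'
    id = Column(Integer, primary_key=True)
    name = Column(Text)

engine = create_engine('postgresql+psycopg2://user:password@localhost/db')

def session_for_schema(schema_name):
    # map "no schema" to the tenant schema for every statement on this connection
    connection = engine.connect().execution_options(
        schema_translate_map={None: schema_name})
    return sessionmaker(bind=connection)()

session = session_for_schema('tenant_1')
session.query(Street).all()   # renders: SELECT ... FROM tenant_1.streets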


[sqlalchemy] Re: Alembic and postgresql multiple schema question

2018-08-29 Thread sector119


I've found an example at 
https://stackoverflow.com/questions/21109218/alembic-support-for-multiple-postgres-schemas

But when I run alembic revision --autogenerate -m "Initial upgrade", the generated 
alembic/versions/24648f118be9_initial_upgrade.py has no schema='myschema' 
keywords on the table, index, or column items.


def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.

    """
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix='sqlalchemy.',
        poolclass=pool.NullPool)

    with connectable.connect() as connection:
        for schema_name in schema_names.split():
            conn = connection.execution_options(
                schema_translate_map={None: schema_name})

            print("Migrating schema %s" % schema_name)

            context.configure(
                connection=conn,
                target_metadata=target_metadata
            )

            with context.begin_transaction():
                context.run_migrations()



On Wednesday, August 29, 2018 at 12:12:19 UTC+3, sector119 wrote:
>
> Hello
>
> I have N schemas with the same set of tables, 1 system schema with users, 
> groups, ... tables and 6 schemas with streets, organizations, transactions, 
> ... tables. 
> On those schemas tables I don't set __table_args__ = ({'schema': SCHEMA},)
> I just call dbsession.execute('SET search_path TO system, %s' % SCHEMA) 
> before sql queries.
>
> When I make some changes in my model structures I want to refactor table 
> in all schemas using Alembic, how can I do that?
> Maybe I can make some loop over my schemas somewhere? 
>
>
> Thanks
>

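One note on the missing schema='...' keywords: with the schema_translate_map
approach the revision files can stay schema-free, and the map supplies the
schema per connection when the migrations run. If schema-qualified autogenerate
output is really wanted instead, Alembic's include_schemas option is the
relevant knob. A minimal, untested sketch for env.py (the version_table_schema
choice is an assumption):

    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=target_metadata,
            include_schemas=True,            # let autogenerate look beyond the default schema
            version_table_schema='public',   # assumption: keep alembic_version in one place
        )

        with context.begin_transaction():
            context.run_migrations()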


[sqlalchemy] Alembic and postgresql multiple schema question

2018-08-29 Thread sector119
Hello

I have N schemas with the same set of tables: one system schema with the users, 
groups, ... tables, and 6 schemas with the streets, organizations, transactions, 
... tables. 
On the tables in those schemas I don't set __table_args__ = ({'schema': SCHEMA},); 
I just call dbsession.execute('SET search_path TO system, %s' % SCHEMA) 
before the SQL queries.

When I change my model structures I want to alter the tables in all schemas 
using Alembic. How can I do that?
Maybe I can put a loop over my schemas somewhere? 


Thanks



Re: [sqlalchemy] Extra table name in FROM clause

2018-01-05 Thread sector119


Here is SQL that works as expected:


SELECT
  m.*,
  d.*
FROM (
  SELECT
   person_id,
   service_id,
   person_id_internal,
   meters_readings, -- two-dimensional array
   meters_readings [generate_subscripts(meters_readings, 1)] [1] AS meter_id,
   meters_readings [generate_subscripts(meters_readings, 1)] [2] AS 
organization_reading,
   date
  FROM
   databases
) d LEFT OUTER JOIN LATERAL (
  SELECT DISTINCT ON (service_id, person_id_internal, meter_id)
user_id,
reading
  FROM meter_readings
  WHERE
person_id = d.person_id AND
service_id = d.service_id AND
meter_id = d.meter_id AND
commit_date > d.date AND
rollback_date IS NULL AND
reading IS NOT NULL
  ORDER BY
service_id,
person_id_internal,
meter_id,
commit_date DESC,
commit_time DESC
) m ON TRUE
WHERE
  d.person_id = 2099000301470;



And this is the SA query that also works :) I managed to write it.


def get_person_meters_q(dbsession, person_id):
    database = dbsession.query(
        Database.person_id,
        Database.service_id,
        Database.person_id_internal,
        Database.date,
        (
            type_coerce(
                Database.meters_readings[
                    func.generate_subscripts(Database.meters_readings, 1)
                ],
                type_=ARRAY(Text)
            )[1]
        ).label('meter_id'),
        (
            type_coerce(
                Database.meters_readings[
                    func.generate_subscripts(Database.meters_readings, 1)
                ],
                type_=ARRAY(Text)
            )[2]
        ).label('organization_reading')
    ).subquery()

    meter_readings = dbsession.query(
        MeterReading.user_id,
        MeterReading.reading
    ). \
        distinct(
            MeterReading.service_id,
            MeterReading.person_id_internal,
            MeterReading.meter_id). \
        filter(
            and_(
                MeterReading.person_id == database.c.person_id,
                MeterReading.service_id == database.c.service_id,
                MeterReading.meter_id == database.c.meter_id,
                MeterReading.commit_date > database.c.date,
                MeterReading.rollback_date == None,
                MeterReading.reading != None
            )
        ). \
        order_by(
            MeterReading.service_id,
            MeterReading.person_id_internal,
            MeterReading.meter_id,
            MeterReading.commit_date.desc(),
            MeterReading.commit_time.desc()
        ).subquery().lateral()

    q = dbsession.query(
        meter_readings,
        database
    ). \
        select_from(
            database.outerjoin(meter_readings, true())
        ).filter(database.c.person_id == person_id)

    return q



The problem with the extra "databases" table name in the SELECT was that I used 
filter(Database.person_id == person_id) 

instead of filter(database.c.person_id == person_id).



About LATERAL 

https://www.postgresql.org/docs/current/static/sql-select.html

The LATERAL key word can precede a sub-SELECT FROM item.

This allows the sub-SELECT to refer to columns of FROM items that appear 
before it in the FROM list.

(Without LATERAL, each sub-SELECT is evaluated independently and so cannot 
cross-reference any other FROM item.)


So I refer to columns of the database query (the first select) from the 
meter_readings query (the second select).


Without LATERAL I get:

ERROR: invalid reference to FROM-clause entry for table "d" 

Hint: There is an entry for table "d", but it cannot be referenced from this 
part of the query.


And that is SQL query result:

\N  \N  2099000153759   75  952160  {{952160,1140,0}}   952160  
11402017-11-30
\N  \N  2099000153759   2   27852   {{219935,14768,0}}  219935  
14768   2017-11-30
\N  \N  2099000153759   4   206688  {{468805,57,0},{589164,73,0}}   
468805  57  2017-11-30
\N  \N  2099000153759   4   206688  {{468805,57,0},{589164,73,0}}   
589164  73  2017-11-30


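As a footnote to the LATERAL explanation above, the same join shape in a
stripped-down form; a minimal Core-level sketch in which the a and b tables are
placeholders rather than the poster's models:

from sqlalchemy import Column, Integer, MetaData, Table, select, true

metadata = MetaData()
a = Table('a', metadata, Column('id', Integer), Column('x', Integer))
b = Table('b', metadata, Column('a_id', Integer), Column('y', Integer))

# the lateral subquery may refer to columns of "a", which appears before it in FROM
lat = (
    select([b.c.y])
    .where(b.c.a_id == a.c.id)
    .order_by(b.c.y.desc())
    .limit(1)
    .lateral('latest_b')
)

# SELECT a.id, latest_b.y FROM a LEFT OUTER JOIN LATERAL (...) AS latest_b ON true
stmt = select([a.c.id, lat.c.y]).select_from(a.outerjoin(lat, true()))
print(stmt)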


[sqlalchemy] Re: Extra table name in FROM clause

2018-01-04 Thread sector119
oops, .filter(database.c.person_id == person_id) fixed the problem.

Thanks!

On Thursday, January 4, 2018 at 23:08:35 UTC+2, sector119 wrote:
>
> Hello,
>
> In this query I get extra (unexpected) "databases" table name in FROM 
> clause
>
> q = dbsession.query(
> meter_readings.c.reading,
> database.c.service_id
> ). \
> select_from(
> database.outerjoin(meter_readings, true())
> ).filter(Database.person_id == person_id)
>
>
> Here is the query:
> SELECT ...
> FROM *databases*, (SELECT ... FROM databases) AS anon_2 LEFT OUTER JOIN 
> LATERAL (SELECT ... FROM meter_readings) AS anon_1 ON true WHERE 
> databases.person_id = 123
>
> But I expect to get it without *databases *table like
> SELECT ... FROM (SELECT ... FROM databases) AS anon_2 LEFT OUTER JOIN 
> LATERAL (SELECT ... FROM meter_readings) AS anon_1 ON true WHERE 
> databases.person_id = 123
>
>
> What am I doing wrong?
>
>
>
> database = dbsession.query(
> Database.person_id,
> Database.service_id,
> Database.person_id_internal,
> Database.date,
> (
> type_coerce(
> Database.meters_readings[
> func.generate_subscripts(Database.meters_readings, 1)
> ],
> type_=ARRAY(Text)
> )[1]
> ).label('meter_id'),
> (
> type_coerce(
> Database.meters_readings[
> func.generate_subscripts(Database.meters_readings, 1)
> ],
> type_=ARRAY(Text)
> )[2]
> ).label('organization_reading')
> ).subquery()
>
> meter_readings = dbsession.query(
> MeterReading.user_id,
> MeterReading.reading
> ). \
> distinct(
> MeterReading.service_id,
> MeterReading.person_id_internal,
> MeterReading.meter_id). \
> filter(
> and_(
> MeterReading.person_id == database.c.person_id,
> MeterReading.service_id == database.c.service_id,
> MeterReading.meter_id == database.c.meter_id,
> MeterReading.commit_date > database.c.date,
> MeterReading.rollback_date == None,
> MeterReading.reading != None
> )
> ). \
> order_by(
> MeterReading.service_id,
> MeterReading.person_id_internal,
> MeterReading.meter_id,
> MeterReading.commit_date.desc(),
> MeterReading.commit_time.desc()
> ).subquery().lateral()
>
>
>



[sqlalchemy] Extra table name in FROM clause

2018-01-04 Thread sector119
Hello,

In this query I get an extra (unexpected) "databases" table name in the FROM clause:

q = dbsession.query(
    meter_readings.c.reading,
    database.c.service_id
). \
    select_from(
        database.outerjoin(meter_readings, true())
    ).filter(Database.person_id == person_id)


Here is the query:
SELECT ...
FROM databases, (SELECT ... FROM databases) AS anon_2 LEFT OUTER JOIN 
LATERAL (SELECT ... FROM meter_readings) AS anon_1 ON true WHERE 
databases.person_id = 123

But I expect to get it without the databases table, like
SELECT ... FROM (SELECT ... FROM databases) AS anon_2 LEFT OUTER JOIN 
LATERAL (SELECT ... FROM meter_readings) AS anon_1 ON true WHERE 
databases.person_id = 123


What am I doing wrong?



database = dbsession.query(
    Database.person_id,
    Database.service_id,
    Database.person_id_internal,
    Database.date,
    (
        type_coerce(
            Database.meters_readings[
                func.generate_subscripts(Database.meters_readings, 1)
            ],
            type_=ARRAY(Text)
        )[1]
    ).label('meter_id'),
    (
        type_coerce(
            Database.meters_readings[
                func.generate_subscripts(Database.meters_readings, 1)
            ],
            type_=ARRAY(Text)
        )[2]
    ).label('organization_reading')
).subquery()

meter_readings = dbsession.query(
    MeterReading.user_id,
    MeterReading.reading
). \
    distinct(
        MeterReading.service_id,
        MeterReading.person_id_internal,
        MeterReading.meter_id). \
    filter(
        and_(
            MeterReading.person_id == database.c.person_id,
            MeterReading.service_id == database.c.service_id,
            MeterReading.meter_id == database.c.meter_id,
            MeterReading.commit_date > database.c.date,
            MeterReading.rollback_date == None,
            MeterReading.reading != None
        )
    ). \
    order_by(
        MeterReading.service_id,
        MeterReading.person_id_internal,
        MeterReading.meter_id,
        MeterReading.commit_date.desc(),
        MeterReading.commit_time.desc()
    ).subquery().lateral()




Re: [sqlalchemy] NotImplementedError: Operator 'getitem' is not supported on this expression

2018-01-04 Thread sector119
Thank you, Mike, but I need something like this:

type_coerce(
    Database.meters_readings[
        func.generate_subscripts(Database.meters_readings, 1)
    ], type_=ARRAY(Text)
)[1]

Database.meters_readings is a two-dimensional array like [['1', '2', '3'], ['4', '5', '6']], so

Database.meters_readings[func.generate_subscripts(...)] must return ['1', '2', '3'] 
or ['4', '5', '6'],

and then I want to get the first or second element of that "sub-array".

func.generate_subscripts has to return an int, and that is OK, but why can't I get 
that "sub-array" without type_coerce? 



On Thursday, January 4, 2018 at 17:48:56 UTC+2, Mike Bayer wrote:
>
> On Thu, Jan 4, 2018 at 6:49 AM, sector119 <sect...@gmail.com > 
> wrote: 
> > Hello, 
> > 
> > 
> > I try to rewrite "meters_readings[generate_subscripts(meters_readings, 
> > 1)][1]" from plain sql query that works 
> > 
> > with sqlalchemy and 
> > 
> Database.meters_readings[func.generate_subscripts(Database.meters_readings, 
> > 1)][1] doesn't work 
>
>
> your func.generate_subscripts() function doesn't know that it needs to 
> return a datatype that fulfills the "Indexable" contract, that of 
> ARRAY, JSON and HSTORE (though PG documents this as returning "setof", 
> that supports indexing?  shrugs).  You need to give it the type you 
> need, which I guess in this case the closest is ARRAY: 
>
> >>> f = func.generate_subscripts(column('q'), 1, type_=ARRAY(Integer)) 
> >>> print f[5] 
> (generate_subscripts(q, :generate_subscripts_1))[:generate_subscripts_2] 
>
>
>
>
> > 
> > I get NotImplementedError: Operator 'getitem' is not supported on this 
> > expression 
> > 
> > 
> > Is it possible? 
> > 
> > 
> > 
> > def get_person_meters_q(dbsession, person_id): 
> > database = dbsession.query( 
> > Database.person_id, 
> > Database.service_id, 
> > Database.person_id_internal, 
> > Database.meters_readings, 
> > 
> > 
> (Database.meters_readings[func.generate_subscripts(Database.meters_readings, 
>
> > 1)][1]).label('meter_id'), 
> > 
> > 
> (Database.meters_readings[func.generate_subscripts(Database.meters_readings, 
>
> > 1)][2]).label('organization_reading'), 
> > Database.date 
> > ).subquery() 
> > 
> > meter_readings = dbsession.query( 
> > MeterReading.user_id, 
> > MeterReading.reading 
> > ). \ 
> > distinct( 
> > MeterReading.service_id, 
> > MeterReading.person_id_internal, 
> > MeterReading.meter_id). \ 
> > filter( 
> > and_( 
> > MeterReading.person_id == database.c.person_id, 
> > MeterReading.service_id == database.c.service_id, 
> > MeterReading.meter_id == database.c.meter_id, 
> > MeterReading.commit_date > database.c.date, 
> > MeterReading.rollback_date == None, 
> > MeterReading.reading != None 
> > ) 
> > ). \ 
> > order_by( 
> > MeterReading.service_id, 
> > MeterReading.person_id_internal, 
> > MeterReading.meter_id, 
> > MeterReading.commit_date.desc(), 
> > MeterReading.commit_time.desc() 
> > ).subquery().lateral() 
> > 
> > q = dbsession.query( 
> > meter_readings, 
> > database 
> > ). \ 
> > select_from(database). \ 
> > outerjoin(meter_readings, true()). \ 
> > filter(Database.person_id == person_id) 
> > 
> > return q 
> > 
> > 
> > 
> >   File 
> "/home/sector119/PycharmProjects/epsilon/epsilon/scripts/testdb.py", 
> > line 98, in get_person_meters_q 
> > 
> > 
> (Database.meters_readings[func.generate_subscripts(Database.meters_readings, 
>
> > 1)][1]).label('meter_id'), 
> >   File 
> > 
> "/home/sector119/PythonVirtualEnv/epsilon/lib/python3.5/site-packages/sqlalchemy/sql/operators.py",
>  
>
> > line 411, in __getitem__ 
> > return self.operate(getitem, index) 
> >   File 
> > 
> "/home/sector119/PythonVirtualEnv/epsilon/lib/python3.5/site-packages/sqlalchemy/sql/elements.py",
>  
>
> > line 692, in operate 
> > return op(self.comparator, *other, **kwargs) 
> >   File 
> > 
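To sum up the two working spellings from this thread: give the function an
Indexable return type via type_= (Mike's suggestion above), or coerce the
indexed expression with type_coerce as the poster did. A tiny sketch of the
first form, with a placeholder column name:

from sqlalchemy import Integer, column, func
from sqlalchemy.dialects.postgresql import ARRAY

readings = column('meters_readings')

# declaring the return type as ARRAY makes the [] operator available
subs = func.generate_subscripts(readings, 1, type_=ARRAY(Integer))
print(subs[1])
# roughly: (generate_subscripts(meters_readings, :param_1))[:param_2]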

[sqlalchemy] NotImplementedError: Operator 'getitem' is not supported on this expression

2018-01-04 Thread sector119


Hello,


I'm trying to rewrite "meters_readings[generate_subscripts(meters_readings, 1)][1]" 
from a plain SQL query that works into SQLAlchemy, but 
Database.meters_readings[func.generate_subscripts(Database.meters_readings, 1)][1] 
doesn't work.

I get NotImplementedError: Operator 'getitem' is not supported on this expression.


Is it possible?



def get_person_meters_q(dbsession, person_id):
    database = dbsession.query(
        Database.person_id,
        Database.service_id,
        Database.person_id_internal,
        Database.meters_readings,
        (Database.meters_readings[func.generate_subscripts(Database.meters_readings, 1)][1]).label('meter_id'),
        (Database.meters_readings[func.generate_subscripts(Database.meters_readings, 1)][2]).label('organization_reading'),
        Database.date
    ).subquery()

    meter_readings = dbsession.query(
        MeterReading.user_id,
        MeterReading.reading
    ). \
        distinct(
            MeterReading.service_id,
            MeterReading.person_id_internal,
            MeterReading.meter_id). \
        filter(
            and_(
                MeterReading.person_id == database.c.person_id,
                MeterReading.service_id == database.c.service_id,
                MeterReading.meter_id == database.c.meter_id,
                MeterReading.commit_date > database.c.date,
                MeterReading.rollback_date == None,
                MeterReading.reading != None
            )
        ). \
        order_by(
            MeterReading.service_id,
            MeterReading.person_id_internal,
            MeterReading.meter_id,
            MeterReading.commit_date.desc(),
            MeterReading.commit_time.desc()
        ).subquery().lateral()

    q = dbsession.query(
        meter_readings,
        database
    ). \
        select_from(database). \
        outerjoin(meter_readings, true()). \
        filter(Database.person_id == person_id)

    return q



  File "/home/sector119/PycharmProjects/epsilon/epsilon/scripts/testdb.py", 
line 98, in get_person_meters_q

(Database.meters_readings[func.generate_subscripts(Database.meters_readings, 
1)][1]).label('meter_id'),
  File 
"/home/sector119/PythonVirtualEnv/epsilon/lib/python3.5/site-packages/sqlalchemy/sql/operators.py",
 
line 411, in __getitem__
return self.operate(getitem, index)
  File 
"/home/sector119/PythonVirtualEnv/epsilon/lib/python3.5/site-packages/sqlalchemy/sql/elements.py",
 
line 692, in operate
return op(self.comparator, *other, **kwargs)
  File 
"/home/sector119/PythonVirtualEnv/epsilon/lib/python3.5/site-packages/sqlalchemy/sql/operators.py",
 
line 411, in __getitem__
return self.operate(getitem, index)
  File "", line 1, in 
  File 
"/home/sector119/PythonVirtualEnv/epsilon/lib/python3.5/site-packages/sqlalchemy/sql/type_api.py",
 
line 63, in operate
    return o[0](self.expr, op, *(other + o[1:]), **kwargs)
  File 
"/home/sector119/PythonVirtualEnv/epsilon/lib/python3.5/site-packages/sqlalchemy/sql/default_comparator.py",
 
line 192, in _getitem_impl
_unsupported_impl(expr, op, other, **kw)
  File 
"/home/sector119/PythonVirtualEnv/epsilon/lib/python3.5/site-packages/sqlalchemy/sql/default_comparator.py",
 
line 197, in _unsupported_impl
"this expression" % op.__name__)
NotImplementedError: Operator 'getitem' is not supported on this expression



Re: [sqlalchemy] update instance.relation.attr in instance.attr "set event listener"

2016-03-23 Thread sector119
Going back to the subject:

I get this warning (and no data is updated) while using no_autoflush:

/Users/sector119/PythonVirtualEnv/supplements/lib/python2.7/site-packages/sqlalchemy/orm/session.py:2122:
 
SAWarning: Attribute history events accumulated on 1 previously clean 
instances within inner-flush event handlers have been reset, and will not 
result in database updates. Consider using set_committed_value() within 
inner-flush event handlers to avoid this warning.

def reviews_after_insert_listener(mapper, connection, target):
    session = object_session(target)

    with session.no_autoflush:
        print repr(target.overview)
        target.overview.reviews_count += 1

        target.overview.rating_sum += target.overview_rating
        target.overview.rating_count += 1

        if target.overview_flavor_rating is not None:
            target.overview_flavor.rating_sum += target.overview_flavor_rating
            target.overview_flavor.rating_count += 1


event.listen(ProductOverviewReview, 'after_insert',
             reviews_after_insert_listener)


On Thursday, February 25, 2016 at 15:41:03 UTC+2, Simon King wrote:
>
> Er, ok. There are simpler ways to avoid autoflush
>
> https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/DisableAutoflush
>
> for example:
>
> session = sqlalchemy.orm.object_session(mapper)
> with session.no_autoflush:
> target.product.quantity += (value - oldvalue)
>
> ...but that still doesn't fix your underlying problem, which is that you 
> are assigning an unexpected object to your "label" property. I guess 
> FieldStorage comes from your web framework, and you need to extract the 
> actual value from that before assigning it to your mapped object.
>
> Simon
>
> On Thu, Feb 25, 2016 at 1:24 PM, sector119 <sect...@gmail.com 
> > wrote:
>
>> Thanks a lot, I add 
>>
>> @event.listens_for(ProductFlavor, 'after_update')
>> def quantity_before_update_listener(mapper, connection, target):
>> quantity = 
>> select([func.coalesce(func.sum(ProductFlavor.__table__.c.quantity), 
>> 0)]).where(
>> ProductFlavor.__table__.c.product_id == Product.__table__.c.id)
>> connection.execute(
>> Product.__table__.update().where(Product.__table__.c.id == 
>> target.product_id).values(quantity=quantity))
>>
>>
>> On Thursday, February 25, 2016 at 11:52:50 UTC+2, Simon King wrote:
>>>
>>> On Wed, Feb 24, 2016 at 9:51 PM, sector119 <sect...@gmail.com> wrote:
>>>
>>>> Hello!
>>>>
>>>> I have two models, Product and ProductFlavor with one-to-many 
>>>> relationship
>>>> And I have a listener, which I want to update Product.quantity on 
>>>> ProductFlavor.quantity change:
>>>>
>>>> @event.listens_for(ProductFlavor.quantity, 'set')
>>>> def quantity_set(target, value, oldvalue, initiator):
>>>> if value != oldvalue:
>>>> target.product.quantity += (value - oldvalue)
>>>>
>>>>
>>>> But I get the following error:
>>>>
>>>>
>>>> ProgrammingError: (raised as a result of Query-invoked autoflush; 
>>>> consider using a session.no_autoflush block if this flush is occurring 
>>>> prematurely) 
>>>>
>>>> (psycopg2.ProgrammingError) can't adapt type 'instance' [SQL: 'UPDATE 
>>>> product_flavor SET label=%(label)s WHERE product_flavor.id = 
>>>> %(product_flavor_id)s'] [parameters: {'product_flavor_id': 4, 'label': 
>>>> FieldStorage('label', u'42bbebd1f7ba46b58d3d4b794b4b890e.png')}]
>>>>
>>>> What I'm doing wrong?
>>>>
>>>
>>> It looks like you're assigning a non-string to your "label" column. This 
>>> isn't directly related to your attribute listener - the error would happen 
>>> even without the attribute listener when you called session.flush() or 
>>> session.commit(). The attribute listener is just causing the flush to 
>>> happen earlier presumably because "target.product" has not yet been loaded 
>>> from the database.
>>>
>>> Simon
>>>
>>>
>>> -- 
>>
>
>

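The SAWarning quoted above points at set_committed_value(). A minimal sketch of
that pattern, with the model and column names taken loosely from the thread
(they are assumptions): do the increment with a Core UPDATE on the flush
connection, then record the known new value on the in-memory object without
creating fresh attribute history.

from sqlalchemy import event
from sqlalchemy.orm.attributes import set_committed_value

@event.listens_for(ProductOverviewReview, 'after_insert')
def reviews_after_insert_listener(mapper, connection, target):
    overview = target.overview            # assumed to be loaded already
    overview_t = type(overview).__table__

    # SQL-side increment on the same connection the flush is using
    connection.execute(
        overview_t.update()
        .where(overview_t.c.id == overview.id)
        .values(reviews_count=overview_t.c.reviews_count + 1)
    )

    # tell the Session what the column now holds, without marking it dirty
    set_committed_value(overview, 'reviews_count', overview.reviews_count + 1)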


Re: [sqlalchemy] need help with query

2016-03-01 Thread sector119
With that query I get:
InvalidRequestError: Could not find a FROM clause to join from.  Tried 
joining to , but got: Can't find any 
foreign key relationships between 'category' and 'product'.

The Product and Category models have a many-to-many relationship.



[sqlalchemy] Re: Need help with update + select query

2016-02-25 Thread sector119


product_t = Product.__table__
product_flavor_t = ProductFlavor.__table__
product_t_a = product_t.alias()

op.add_column(u'product', sa.Column('servings', sa.Integer(), nullable=True))
op.add_column(u'product', sa.Column('flavors_count', sa.Integer(), 
nullable=True))

servings = select([func.coalesce(func.avg(product_flavor_t.c.size), 0)]).\
    select_from(product_t_a.outerjoin(product_flavor_t)).\
    where(product_t.c.id == product_t_a.c.id)

flavors_count = select([func.count(product_flavor_t.c.id)]).\
    select_from(product_t_a.outerjoin(product_flavor_t)).\
    where(product_t.c.id == product_t_a.c.id)

op.execute(product_t.update().values(servings=servings,
                                     flavors_count=flavors_count))


op.alter_column(u'product', 'servings', nullable=False)
op.alter_column(u'product', 'flavors_count', nullable=False)



[sqlalchemy] Re: Need help with update + select query

2016-02-25 Thread sector119
PostgreSQL doesn't support this yet, so I have to use a separate query for each 
aggregate:

UPDATE product SET (servings, flavors_count) = (
    SELECT coalesce(avg(f.size), 0),
           count(f.id)
    FROM product p LEFT OUTER JOIN product_flavor f ON p.id = f.product_id
    WHERE product.id = p.id
)

On Thursday, February 25, 2016 at 15:19:11 UTC+2, sector119 wrote:
>
> Hello,
>
> Can some one help me with that query? I get AttributeError: servings
> I expect that sqlalchemy use update from select for that query or it's not 
> possible and I must use select(...).as_scalar() for every updated column?
>
> s = 
> select([func.coalesce(func.avg(product_flavor_t.c.size).label('servings'), 
> 0),
> func.count().label('flavors_count')]).\
> where(and_(product_flavor_t.c.product_id == product_t.c.id, 
> product_flavor_t.c.quantity > 0))
>
> op.execute(product_t.update().values(servings=s.c.servings, 
> flavors_count=s.c.flavors_count))
>
> Thanks!
>



Re: [sqlalchemy] Need help with update + select query

2016-02-25 Thread sector119


s = select([func.coalesce(func.avg(product_flavor_t.c.size), 0).label('f1'),
            func.count(product_flavor_t.c.id).label('f2')]).\
    select_from(product_t.outerjoin(product_flavor_t))
op.execute(product_t.update().values(servings=s.c.f1, flavors_count=s.c.f2))


I tried that, but it doesn't work as expected.


I got an UPDATE without the SELECT:

sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) column "f1" does 
not exist
LINE 1: UPDATE product SET servings=f1, flavors_count=f2



And PostgreSQL log file contains:


statement: ALTER TABLE product ADD COLUMN servings INTEGER
statement: ALTER TABLE product ADD COLUMN flavors_count INTEGER


statement: UPDATE product SET servings=f1, flavors_count=f2
ERROR:  column "f1" does not exist at character 29
STATEMENT:  UPDATE product SET servings=f1, flavors_count=f2

statement: ROLLBACK


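The spelling that ended up working (see the "[sqlalchemy] Re: Need help with
update + select query" message earlier in this archive) gives every updated
column its own correlated scalar subquery. A compressed sketch reusing the
product_t / product_flavor_t / op names from this thread:

from sqlalchemy import func, select

product_t_a = product_t.alias()

servings = select([func.coalesce(func.avg(product_flavor_t.c.size), 0)]).\
    select_from(product_t_a.outerjoin(product_flavor_t)).\
    where(product_t.c.id == product_t_a.c.id).\
    as_scalar()

flavors_count = select([func.count(product_flavor_t.c.id)]).\
    select_from(product_t_a.outerjoin(product_flavor_t)).\
    where(product_t.c.id == product_t_a.c.id).\
    as_scalar()

# one correlated scalar subquery per column
op.execute(product_t.update().values(servings=servings,
                                     flavors_count=flavors_count))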


Re: [sqlalchemy] update instance.relation.attr in instance.attr "set event listener"

2016-02-25 Thread sector119
Thanks a lot; I added:

@event.listens_for(ProductFlavor, 'after_update')
def quantity_before_update_listener(mapper, connection, target):
    quantity = select(
        [func.coalesce(func.sum(ProductFlavor.__table__.c.quantity), 0)]
    ).where(ProductFlavor.__table__.c.product_id == Product.__table__.c.id)
    connection.execute(
        Product.__table__.update()
        .where(Product.__table__.c.id == target.product_id)
        .values(quantity=quantity))


On Thursday, February 25, 2016 at 11:52:50 UTC+2, Simon King wrote:
>
> On Wed, Feb 24, 2016 at 9:51 PM, sector119 <sect...@gmail.com 
> > wrote:
>
>> Hello!
>>
>> I have two models, Product and ProductFlavor with one-to-many relationship
>> And I have a listener, which I want to update Product.quantity on 
>> ProductFlavor.quantity change:
>>
>> @event.listens_for(ProductFlavor.quantity, 'set')
>> def quantity_set(target, value, oldvalue, initiator):
>> if value != oldvalue:
>> target.product.quantity += (value - oldvalue)
>>
>>
>> But I get the following error:
>>
>>
>> ProgrammingError: (raised as a result of Query-invoked autoflush; 
>> consider using a session.no_autoflush block if this flush is occurring 
>> prematurely) 
>>
>> (psycopg2.ProgrammingError) can't adapt type 'instance' [SQL: 'UPDATE 
>> product_flavor SET label=%(label)s WHERE product_flavor.id = 
>> %(product_flavor_id)s'] [parameters: {'product_flavor_id': 4, 'label': 
>> FieldStorage('label', u'42bbebd1f7ba46b58d3d4b794b4b890e.png')}]
>>
>> What I'm doing wrong?
>>
>
> It looks like you're assigning a non-string to your "label" column. This 
> isn't directly related to your attribute listener - the error would happen 
> even without the attribute listener when you called session.flush() or 
> session.commit(). The attribute listener is just causing the flush to 
> happen earlier presumably because "target.product" has not yet been loaded 
> from the database.
>
> Simon
>
>
>



[sqlalchemy] Need help with update + select query

2016-02-25 Thread sector119
Hello,

Can someone help me with this query? I get AttributeError: servings.
I expected SQLAlchemy to emit an UPDATE from a SELECT for this query. Or is that 
not possible, so that I must use select(...).as_scalar() for every updated column?

s = select([func.coalesce(func.avg(product_flavor_t.c.size).label('servings'), 0),
            func.count().label('flavors_count')]).\
    where(and_(product_flavor_t.c.product_id == product_t.c.id,
               product_flavor_t.c.quantity > 0))

op.execute(product_t.update().values(servings=s.c.servings,
                                     flavors_count=s.c.flavors_count))

Thanks!



[sqlalchemy] update instance.relation.attr in instance.attr "set event listener"

2016-02-24 Thread sector119
Hello!

I have two models, Product and ProductFlavor, with a one-to-many relationship.
I also have a listener with which I want to update Product.quantity whenever 
ProductFlavor.quantity changes:

@event.listens_for(ProductFlavor.quantity, 'set')
def quantity_set(target, value, oldvalue, initiator):
    if value != oldvalue:
        target.product.quantity += (value - oldvalue)


But I get the following error:


ProgrammingError: (raised as a result of Query-invoked autoflush; consider 
using a session.no_autoflush block if this flush is occurring prematurely) 

(psycopg2.ProgrammingError) can't adapt type 'instance' [SQL: 'UPDATE 
product_flavor SET label=%(label)s WHERE product_flavor.id = 
%(product_flavor_id)s'] [parameters: {'product_flavor_id': 4, 'label': 
FieldStorage('label', u'42bbebd1f7ba46b58d3d4b794b4b890e.png')}]

What am I doing wrong?

Thank You!



[sqlalchemy] Re: How to jsonify Query result?

2015-07-15 Thread sector119
OK, that approach isn't great. I took another one from flask_jsontools, and 
it does what I need!


json.dumps(q, cls=DynamicJSONEncoder)


Base = declarative_base(cls=JsonSerializableBase)


import decimal
from datetime import datetime, date
from json import JSONEncoder

from sqlalchemy import inspect
from sqlalchemy.orm.state import InstanceState


class DynamicJSONEncoder(JSONEncoder):
    def default(self, obj):
        if isinstance(obj, (datetime, date)):
            return obj.isoformat()
        elif isinstance(obj, decimal.Decimal):
            return float(obj)
        elif hasattr(obj, '__json__'):
            return obj.__json__()

        return super(DynamicJSONEncoder, self).default(obj)


def get_entity_propnames(entity):
    ins = entity if isinstance(entity, InstanceState) else inspect(entity)
    return set(ins.mapper.column_attrs.keys() + ins.mapper.relationships.keys())


def get_entity_loaded_propnames(entity):
    """Get entity property names that are loaded (e.g. won't produce new queries)"""
    ins = inspect(entity)
    keynames = get_entity_propnames(ins)

    # If the entity is not transient -- exclude unloaded keys
    # Transient entities won't load these anyway, so it's safe to include all columns and get defaults
    if not ins.transient:
        keynames -= ins.unloaded

    # If the entity is expired -- reload expired attributes as well
    # Expired attributes are usually unloaded as well!
    if ins.expired:
        keynames |= ins.expired_attributes

    return keynames


class JsonSerializableBase(object):
    """Declarative Base mixin to allow objects serialization"""

    __json_private__ = set()

    def __json__(self):
        return {name: getattr(self, name)
                for name in get_entity_loaded_propnames(self) - self.__json_private__}





[sqlalchemy] Re: How to jsonify Query result?

2015-07-14 Thread sector119


I created a Serializer class and use it for all my models with: Base = 
declarative_base(cls=Serializer)


But I have one problem! I have an Organization model and a Service model with an 
organization = relationship('Organization', backref='services') relationship, 
and I get RuntimeError: maximum recursion depth exceeded while calling a 
Python object.


What can I do about that? Help me please!


Traceback (most recent call last):
  File s.py, line 10, in module
print repr(r.to_dict())
  File /Users/sector119/PycharmProjects/epsilon/epsilon/models/serializer.py, 
line 31, in to_dict
value = self._serialize(key, field.value)
  File /Users/sector119/PycharmProjects/epsilon/epsilon/models/serializer.py, 
line 48, in _serialize
result.append(cls._serialize(key, v))
  File /Users/sector119/PycharmProjects/epsilon/epsilon/models/serializer.py, 
line 51, in _serialize
result = value.to_dict()
  File /Users/sector119/PycharmProjects/epsilon/epsilon/models/serializer.py, 
line 31, in to_dict

...
...
...
  File 
/Users/sector119/PythonVirtualEnv/epsilon/lib/python2.7/site-packages/sqlalchemy/orm/state.py,
 
line 671, in value
self.state.obj(), self.state.class_)
  File 
/Users/sector119/PythonVirtualEnv/epsilon/lib/python2.7/site-packages/sqlalchemy/orm/attributes.py,
 
line 233, in __get__
dict_ = instance_dict(instance)
RuntimeError: maximum recursion depth exceeded while calling a Python object



from datetime import datetime, date

from sqlalchemy.inspection import inspect


class Serializer(object):
    __public__ = None
    __private__ = None

    def to_dict(self, exclude=(), extra=()):
        data = {}
        items = inspect(self).attrs.items()

        if self.__public__:
            public = self.__public__ + extra
        else:
            public = extra

        if self.__private__:
            private = self.__private__ + exclude
        else:
            private = exclude

        for key, field in items:
            if private and key in private:
                continue

            if public and key not in public:
                continue

            value = self._serialize(key, field.value)
            if value:
                data[key] = value

        return data

    @classmethod
    def _serialize(cls, key, value):
        print 'Field name: %s' % key

        if isinstance(value, (datetime, date)):
            print 'Type date'
            result = value.isoformat()
        elif hasattr(value, '__iter__'):
            print 'Type iterable'
            result = []
            for v in value:
                result.append(cls._serialize(key, v))
        elif isinstance(value, Serializer):
            print 'Type Serializer'
            result = value.to_dict()
        else:
            print 'Type other'
            result = value

        print 'Result %r\n' % result
        return result




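The recursion comes from the relationship pair serializing each other forever
(Service.organization and the Organization.services backref). One way out,
sketched here under the assumption that skipping already-visited objects is
acceptable (this is not the flask_jsontools code), is to thread a set of seen
ids through the calls:

from datetime import date, datetime
from sqlalchemy import inspect

class Serializer(object):

    def to_dict(self, _seen=None):
        _seen = set() if _seen is None else _seen
        if id(self) in _seen:
            return None          # already being serialized higher up: stop the cycle
        _seen.add(id(self))

        return {key: self._serialize(field.value, _seen)
                for key, field in inspect(self).attrs.items()}

    @classmethod
    def _serialize(cls, value, _seen):
        if isinstance(value, (datetime, date)):
            return value.isoformat()
        if isinstance(value, Serializer):
            return value.to_dict(_seen=_seen)
        if hasattr(value, '__iter__') and not isinstance(value, str):
            return [cls._serialize(v, _seen) for v in value]
        return value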


[sqlalchemy] How to jsonify Query result?

2015-07-11 Thread sector119
Hello,

How can we jsonify the tuples returned by Query?

Thanks.



[sqlalchemy] Re: Can't create table with ENUM type column

2015-07-05 Thread sector119
Column(ENUM('copy', 'insert', 'update', name='import_action')) works.
Column(ARRAY(ENUM('copy', 'insert', 'update', name='import_action'))) does not 
create the enum type.



[sqlalchemy] Re: Can't create table with ENUM type column

2015-07-05 Thread sector119
The code below creates the enum type, but server_default still doesn't work as I 
expect.


import_action = ENUM('copy', 'insert', 'update', name='import_action')

class Import(Base):
    __tablename__ = 'import'

    id = Column(Integer, primary_key=True)
    name = Column(UnicodeText, nullable=False)
    actions = Column(ARRAY(import_action),
                     server_default=array(['copy'], type_=import_action),
                     nullable=False)

@event.listens_for(Import.__table__, 'before_create')
def receive_before_create(target, connection, **kwargs):
    import_action.create(connection)

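For later readers: the explicit create in the before_create hook above is needed
because an ENUM wrapped inside ARRAY is not picked up for automatic CREATE TYPE
in the SQLAlchemy versions of that era. A standalone sketch; the textual
server_default spelling is an assumption (one PostgreSQL way to write an
enum-array default), not something confirmed in the thread:

from sqlalchemy import Column, Integer, UnicodeText, text
from sqlalchemy.dialects.postgresql import ARRAY, ENUM
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()
import_action = ENUM('copy', 'insert', 'update', name='import_action')

class Import(Base):
    __tablename__ = 'import'

    id = Column(Integer, primary_key=True)
    name = Column(UnicodeText, nullable=False)
    actions = Column(ARRAY(import_action),
                     server_default=text("'{copy}'::import_action[]"),
                     nullable=False)

def create_all(engine):
    # create the enum type explicitly, then the tables that use it inside ARRAY
    import_action.create(engine, checkfirst=True)
    Base.metadata.create_all(engine)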


[sqlalchemy] Can't create table with ENUM type column

2015-07-05 Thread sector119
Hello,

Help me create a table with an enum-typed column; SA does not create the enum type 
before the table :(
And server_default doesn't set the value that I expect: for "actions 
import_action[] DEFAULT ARRAY['copy'] NOT NULL" it sets ARRAY['copy'], not an 
import_action[] value, as the default...

Code and exception:

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column
from sqlalchemy import Integer, UnicodeText
from sqlalchemy.dialects.postgresql import ARRAY, array, ENUM


engine = 
create_engine('postgresql+psycopg2://user:password@127.0.0.1:5432/epsilon', 
echo=True)
Base = declarative_base()
DBSession = sessionmaker(bind=engine)()
Base.metadata.bind = engine

import_action = ENUM('copy', 'insert', 'update', name='import_action',
                     metadata=Base.metadata)

class Import(Base):
    __tablename__ = 'import'

    id = Column(Integer, primary_key=True)
    name = Column(UnicodeText, nullable=False)
    actions = Column(ARRAY(import_action),
                     server_default=array(['copy'], type_=import_action),
                     nullable=False)

Import.__table__.create(checkfirst=True)


sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) type "import_action[]" 
does not exist
LINE 5:  actions import_action[] DEFAULT ARRAY['copy'] NOT NULL, 
         ^
 [SQL: '\nCREATE TABLE import (\n\tid SERIAL NOT NULL, \n\tname TEXT NOT 
NULL, \n\tactions import_action[] DEFAULT ARRAY['copy'] NOT NULL, 
\n\tPRIMARY KEY (id)\n)\n\n']



Re: [sqlalchemy] Can't create table with ENUM type column

2015-07-05 Thread sector119
Thanks a lot for your help, Michael!



[sqlalchemy] Can't create table in just created schema

2015-06-28 Thread sector119


Hello!


I can't create a table in a just-created schema without a commit() after the 
CreateSchema statement; code and traceback are posted below.

Is that expected? Or am I doing something wrong? 


PS. Can I call CreateSchema with .execute_if(callable_=should_create)?



from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.schema import CreateSchema

from epsilon.models import Base, SYSTEM_SCHEMA, SYSTEM_MODELS


engine = 
create_engine('postgresql+psycopg2://USER:PASSWORD@127.0.0.1:5432/epsilon', 
echo=True)
DBSession = sessionmaker(bind=engine)()
Base.metadata.bind = engine

def create_schema(name):
    s = CreateSchema(name)
    DBSession.execute(s)
    # DBSession.commit()  # if I uncomment this line, everything works fine

try:
    create_schema(SYSTEM_SCHEMA)

    DBSession.execute('SET search_path TO %s' % SYSTEM_SCHEMA)
    for m in SYSTEM_MODELS:
        m.__table__.create(checkfirst=True)
    DBSession.commit()
except:
    DBSession.rollback()
    raise




2015-06-28 20:39:04,633 INFO sqlalchemy.engine.base.Engine select version()
2015-06-28 20:39:04,633 INFO sqlalchemy.engine.base.Engine {}
2015-06-28 20:39:04,634 INFO sqlalchemy.engine.base.Engine select 
current_schema()
2015-06-28 20:39:04,635 INFO sqlalchemy.engine.base.Engine {}
2015-06-28 20:39:04,636 INFO sqlalchemy.engine.base.Engine SELECT CAST('test 
plain returns' AS VARCHAR(60)) AS anon_1
2015-06-28 20:39:04,636 INFO sqlalchemy.engine.base.Engine {}
2015-06-28 20:39:04,637 INFO sqlalchemy.engine.base.Engine SELECT CAST('test 
unicode returns' AS VARCHAR(60)) AS anon_1
2015-06-28 20:39:04,638 INFO sqlalchemy.engine.base.Engine {}
2015-06-28 20:39:04,639 INFO sqlalchemy.engine.base.Engine show 
standard_conforming_strings
2015-06-28 20:39:04,639 INFO sqlalchemy.engine.base.Engine {}
2015-06-28 20:39:04,640 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
2015-06-28 20:39:04,640 INFO sqlalchemy.engine.base.Engine CREATE SCHEMA system
2015-06-28 20:39:04,640 INFO sqlalchemy.engine.base.Engine {}
2015-06-28 20:39:04,641 INFO sqlalchemy.engine.base.Engine SET search_path TO 
system
2015-06-28 20:39:04,641 INFO sqlalchemy.engine.base.Engine {}
2015-06-28 20:39:04,650 INFO sqlalchemy.engine.base.Engine select relname from 
pg_class c join pg_namespace n on n.oid=c.relnamespace where 
n.nspname=%(schema)s and relname=%(name)s
2015-06-28 20:39:04,650 INFO sqlalchemy.engine.base.Engine {'name': 
u'locality', 'schema': u'system'}
2015-06-28 20:39:04,654 INFO sqlalchemy.engine.base.Engine 
CREATE TABLE system.locality (
id SERIAL NOT NULL, 
parent_id INTEGER, 
name TEXT NOT NULL, 
type INTEGER NOT NULL, 
schema VARCHAR(63) NOT NULL, 
PRIMARY KEY (id), 
FOREIGN KEY(parent_id) REFERENCES system.locality (id), 
UNIQUE (schema)
)


2015-06-28 20:39:04,654 INFO sqlalchemy.engine.base.Engine {}
2015-06-28 20:39:04,654 INFO sqlalchemy.engine.base.Engine ROLLBACK
2015-06-28 20:39:04,655 INFO sqlalchemy.engine.base.Engine ROLLBACK
Traceback (most recent call last):

...

...

...

sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) schema "system" 
does not exist


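What the traceback suggests is that CREATE SCHEMA runs inside the session's
still-uncommitted transaction while Table.create() goes out over a separate
engine-level connection that cannot yet see the new schema. A minimal sketch of
keeping everything on one connection and one transaction, reusing SYSTEM_SCHEMA
and SYSTEM_MODELS from the script above (whether this fits the surrounding
application is an assumption):

from sqlalchemy import create_engine
from sqlalchemy.schema import CreateSchema

engine = create_engine('postgresql+psycopg2://USER:PASSWORD@127.0.0.1:5432/epsilon')

with engine.begin() as conn:                       # one connection, one transaction
    conn.execute(CreateSchema(SYSTEM_SCHEMA))      # CREATE SCHEMA system
    conn.execute('SET search_path TO %s' % SYSTEM_SCHEMA)
    for m in SYSTEM_MODELS:
        m.__table__.create(bind=conn, checkfirst=True)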


Re: [sqlalchemy] How to create model with dynamically set schema?

2015-02-05 Thread sector119
Michael, I want to set the search_path to a locality schema that depends on user 
settings.

I also have a kind of admin utility where a user can set up a locality environment 
(schema) with a set of tables, and I have the list of models that should be created 
in whatever schema the admin user specifies.

How can I create the tables for that list of models in such a schema?

How can I create those list of Models at some table?



[sqlalchemy] Can't access model object backref

2013-01-25 Thread sector119
Hello.

I have Category model:

class Category(Base):
    __tablename__ = 'categories'

    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('categories.id'))
    name = Column(Unicode(255), nullable=False)
    description = Column(UnicodeText)
    position = Column(Integer)

    children = relationship('Category', backref=backref('parent', remote_side=[id]),
                            lazy='joined', join_depth=1,
                            order_by='Category.position')

But I can't access its 'parent' backref (I want to use it in order_by). Why?
Why?

>>> from whs.models import Category
>>> Category.parent
Traceback (most recent call last):
  File "<console>", line 1, in <module>
AttributeError: type object 'Category' has no attribute 'parent'





[sqlalchemy] Re: Can't access model object backref

2013-01-25 Thread sector119
I want to order_by(Category.parent.name, Category.name). Is that possible?

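On both points, a sketch: the backref only appears on the class once the mappers
are configured (for example after configure_mappers() or the first query), and
ordering by the parent's name is usually done through an alias rather than
through Category.parent.name directly. The session name here is an assumption:

from sqlalchemy.orm import aliased, configure_mappers

configure_mappers()            # after this, Category.parent (the backref) exists

parent = aliased(Category)
q = (
    session.query(Category)
    .outerjoin(parent, Category.parent_id == parent.id)
    .order_by(parent.name, Category.name)
)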




[sqlalchemy] column_property and class Mixin problem

2012-01-03 Thread sector119
Hello.

Can't get this to work: I want to get the users who are online, i.e. users 
whose last_read column >= now() - 30 minutes, 
with a DBSession.query(User).filter(User.is_online) query.

But I get the following error:

  File /home/eps/devel/tourclub/pbb/pbb/models/__init__.py, line 147, in 
module
class User(UserMixin, Base):
  File 
/home/eps/devel/tourclub/sqlalchemy/lib/sqlalchemy/ext/declarative.py, 
line 1273, in __init__
_as_declarative(cls, classname, cls.__dict__)
  File 
/home/eps/devel/tourclub/sqlalchemy/lib/sqlalchemy/ext/declarative.py, 
line 1078, in _as_declarative
column_copies[obj] = getattr(cls, name)
  File 
/home/eps/devel/tourclub/sqlalchemy/lib/sqlalchemy/ext/declarative.py, 
line 1480, in __get__
return desc.fget(cls)
  File /home/eps/devel/tourclub/pbb/pbb/models/__init__.py, line 143, in 
is_online
return column_property(case([(cls.last_read is not None, cls.last_read 
= func.now() - datetime.timedelta(minutes=30))], else_=False))
  File 
/home/eps/devel/tourclub/sqlalchemy/lib/sqlalchemy/sql/expression.py, 
line 607, in case
return _Case(whens, value=value, else_=else_)
  File 
/home/eps/devel/tourclub/sqlalchemy/lib/sqlalchemy/sql/expression.py, 
line 3016, in __init__
_literal_as_binds(r)) for (c, r) in whens
  File 
/home/eps/devel/tourclub/sqlalchemy/lib/sqlalchemy/sql/expression.py, 
line 1410, in _no_literals
bound value. % element)
sqlalchemy.exc.ArgumentError: Ambiguous literal: True.  Use the 'text()' 
function to indicate a SQL expression literal, or 'literal()' to indicate a 
bound value.


I want to get something like this but in SA...

SELECT * FROM user u WHERE (CASE WHEN u.last_read IS NULL THEN False ELSE 
u.last_read >= now() - 30 * INTERVAL '1 minute' END) IS True;


I use the following UserMixin class and User declarative model:

class UserMixin(object):
    id = Column(Integer, primary_key=True)
    username = Column(String(255), unique=True, nullable=False)
    ...
    last_read = Column(DateTime)

    @declared_attr
    def is_online(cls):
        return column_property(case([(cls.last_read is not None,
                                       cls.last_read >= func.now() - datetime.timedelta(minutes=30))],
                                     else_=False))

class User(UserMixin, Base):
    __tablename__ = 'user'

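While the column_property spelling is being worked out, the same "online" test
can also be written directly as a filter; a minimal sketch using the DBSession
and User names from the post (the 30-minute window mirrors the code above):

import datetime
from sqlalchemy import func

cutoff = func.now() - datetime.timedelta(minutes=30)

online_users = (
    DBSession.query(User)
    .filter(User.last_read != None)   # renders SQL "last_read IS NOT NULL"
    .filter(User.last_read >= cutoff)
    .all()
)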



Re: [sqlalchemy] column_property and class Mixin problem

2012-01-03 Thread sector119
Now I get
 
  File 
/home/eps/devel/tourclub/sqlalchemy/lib/sqlalchemy/sql/visitors.py, line 
59, in _compiler_dispatch
return getter(visitor)(self, **kw)
  File 
/home/eps/devel/tourclub/sqlalchemy/lib/sqlalchemy/sql/compiler.py, line 
370, in visit_column
raise exc.CompileError(Cannot compile Column object until 
CompileError: Cannot compile Column object until it's 'name' is assigned.

With

case([(cls.last_read != None, cls.last_read >= func.now() - 
datetime.timedelta(minutes=30))], else_=literal(False)) 
case([(cls.last_read != None, literal(cls.last_read >= func.now() - 
datetime.timedelta(minutes=30)))], else_=literal(False)) 
case([(cls.last_read != None, literal(cls.last_read >= func.now() - 
datetime.timedelta(minutes=30)).label('foo'))], else_=literal(False))




[sqlalchemy] Re: column_property and class Mixin problem

2012-01-03 Thread sector119
Thanks a lot, Michael! Works like a charm!

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/W6s3M_bzrRAJ.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: can't get delete-orphan work

2011-11-13 Thread sector119
Thanks a lot, Michael!

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/Vc5LM46bOiUJ.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] can't get delete-orphan work

2011-11-12 Thread sector119
Hi.

I think that I'm doing something wrong, but I can't get delete-orphan to
work... No DELETEs on static_page_urls are performed...

2011-11-12 12:31:14 EET LOG:  statement: BEGIN
2011-11-12 12:31:14 EET LOG:  statement: DELETE FROM static_pages WHERE 
static_pages.id = 1
2011-11-12 12:31:14 EET ERROR:  update or delete on table static_pages 
violates foreign key constraint static_page_urls_page_id_fkey on table 
static_page_urls
2011-11-12 12:31:14 EET DETAIL:  Key (id)=(1) is still referenced from 
table static_page_urls.
2011-11-12 12:31:14 EET STATEMENT:  DELETE FROM static_pages WHERE 
static_pages.id = 1

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

from sqlalchemy import Column, ForeignKey
from sqlalchemy import Integer, Unicode, UnicodeText

from sqlalchemy.orm import relationship


engine = create_engine('postgresql+psycopg2://USER:PASSWORD@127.0.0.1:5432/test', echo=True)

Session = sessionmaker()
Base = declarative_base()

Session.configure(bind=engine)
Base.metadata.bind = engine


class StaticPageUrl(Base):
    __tablename__ = 'static_page_urls'

    id = Column(Integer, primary_key=True)
    title = Column(Unicode(255), nullable=False)
    url = Column(UnicodeText, nullable=False)
    page_id = Column(Integer, ForeignKey('static_pages.id'), nullable=False)


class StaticPage(Base):
    __tablename__ = 'static_pages'

    id = Column(Integer, primary_key=True)
    title = Column(Unicode(255), nullable=False)
    urls = relationship(StaticPageUrl, cascade='all, delete, delete-orphan')


Base.metadata.create_all(engine)

session = Session()
p = StaticPage(title='test page')
p.urls.append(StaticPageUrl(title='testurl', url='www.url.org'))
session.add(p)
session.flush()
session.commit()
session.close()

session = Session()
user = session.query(StaticPage).filter_by(id=1).delete()
session.flush()
session.commit()
session.close()
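
For reference (the reply itself is not quoted in this archive): Query.delete() issues a single bulk DELETE and does not apply ORM-level cascades such as delete-orphan, which is why the child rows are never deleted. Loading the parent and deleting it through the Session lets the cascade run; a sketch against the models above:

session = Session()
page = session.query(StaticPage).get(1)
if page is not None:
    # delete-orphan cascade deletes the static_page_urls rows before the parent
    session.delete(page)
session.commit()
session.close()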

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/DOaTfEuV_bYJ.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: after_insert mapper event: can't populate target.BACKREF.attr

2011-11-09 Thread sector119
# test_event.py

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

from sqlalchemy import Column, ForeignKey
from sqlalchemy import Integer, UnicodeText
from sqlalchemy import event

from sqlalchemy.orm import relationship

engine = create_engine('postgresql+psycopg2://USER:PASSWORD@127.0.0.1:5432/test', echo=True)

Session = sessionmaker()
Base = declarative_base()

Session.configure(bind=engine)
Base.metadata.bind = engine

class Comment(Base):
    __tablename__ = 'comment'

    id = Column(Integer, primary_key=True)
    content = Column(UnicodeText, nullable=False)
    report_id = Column(Integer, ForeignKey('report.id'), nullable=False)

def comment_after_insert_listener(mapper, connection, target):
    target.report.comments_count += 1

event.listen(Comment, 'after_insert', comment_after_insert_listener)

class Report(Base):
    __tablename__ = 'report'

    id = Column(Integer, primary_key=True)
    content = Column(UnicodeText, nullable=False)
    comments_count = Column(Integer, server_default='0', nullable=False)

    comments = relationship('Comment', backref='report',
                            cascade='all, delete-orphan', order_by='Comment.id')

Base.metadata.create_all(engine)

session = Session()

report = Report(content=u'test report')
session.add(report)
session.flush()
session.commit()

comment = Comment(content=u'test comment', report_id=1)
session.add(comment)
session.flush()
session.commit()   
  


% /home/eps/tourclub/bin/python test_event.py
2011-11-09 10:19:38,970 INFO sqlalchemy.engine.base.Engine select version()
2011-11-09 10:19:38,970 INFO sqlalchemy.engine.base.Engine {}
2011-11-09 10:19:38,972 INFO sqlalchemy.engine.base.Engine select 
current_schema()
2011-11-09 10:19:38,972 INFO sqlalchemy.engine.base.Engine {}
2011-11-09 10:19:38,974 INFO sqlalchemy.engine.base.Engine select relname 
from pg_class c join pg_namespace n on n.oid=c.relnamespace where 
n.nspname=current_schema() and relname=%(name)s
2011-11-09 10:19:38,974 INFO sqlalchemy.engine.base.Engine {'name': 
u'report'}
2011-11-09 10:19:38,976 INFO sqlalchemy.engine.base.Engine select relname 
from pg_class c join pg_namespace n on n.oid=c.relnamespace where 
n.nspname=current_schema() and relname=%(name)s
2011-11-09 10:19:38,976 INFO sqlalchemy.engine.base.Engine {'name': 
u'comment'}
2011-11-09 10:19:38,978 INFO sqlalchemy.engine.base.Engine
CREATE TABLE report (
id SERIAL NOT NULL,
content TEXT NOT NULL,
comments_count INTEGER DEFAULT '0' NOT NULL,
PRIMARY KEY (id)
) 
  
  
2011-11-09 10:19:38,978 INFO sqlalchemy.engine.base.Engine {}
2011-11-09 10:19:39,041 INFO sqlalchemy.engine.base.Engine COMMIT
2011-11-09 10:19:39,051 INFO sqlalchemy.engine.base.Engine
CREATE TABLE comment (
id SERIAL NOT NULL,
content TEXT NOT NULL,
report_id INTEGER NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY(report_id) REFERENCES report (id)
) 
  
  
2011-11-09 10:19:39,051 INFO sqlalchemy.engine.base.Engine {}
2011-11-09 10:19:39,134 INFO sqlalchemy.engine.base.Engine COMMIT
2011-11-09 10:19:39,145 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
2011-11-09 10:19:39,146 INFO sqlalchemy.engine.base.Engine INSERT INTO 
report (content) VALUES (%(content)s) RETURNING report.id
2011-11-09 10:19:39,146 INFO sqlalchemy.engine.base.Engine {'content': 
u'test report'}
2011-11-09 10:19:39,147 INFO sqlalchemy.engine.base.Engine COMMIT
2011-11-09 10:19:39,151 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
2011-11-09 10:19:39,152 INFO sqlalchemy.engine.base.Engine INSERT INTO 
comment (content, report_id) VALUES (%(content)s, %(report_id)s) RETURNING 
comment.id
2011-11-09 10:19:39,152 INFO sqlalchemy.engine.base.Engine {'content': 
u'test comment', 'report_id': 1}
2011-11-09 10:19:39,154 INFO sqlalchemy.engine.base.Engine ROLLBACK
Traceback (most recent call last):
  File test_event.py, line 54, in module
session.flush()
  File /home/eps/devel/tourclub/sqlalchemy/lib/sqlalchemy/orm/session.py, 
line 1547, in flush
self._flush(objects)
  File /home/eps/devel/tourclub/sqlalchemy/lib/sqlalchemy/orm/session.py, 
line 1616, in _flush
flush_context.execute()
  File 
/home/eps/devel/tourclub/sqlalchemy/lib/sqlalchemy/orm/unitofwork.py, 
line 328, in execute
rec.execute(self)
  File 
/home/eps/devel/tourclub/sqlalchemy/lib/sqlalchemy/orm/unitofwork.py, 
line 472, in execute
uow
  File /home/eps/devel/tourclub/sqlalchemy/lib/sqlalchemy/orm/mapper.py, 
line 2270, in _save_obj
mapper.dispatch.after_insert(mapper, connection, state)
  File /home/eps/devel/tourclub/sqlalchemy/lib/sqlalchemy/event.py, line 
274, in __call__
fn(*args, **kw)
  File /home/eps/devel/tourclub/sqlalchemy/lib/sqlalchemy/orm/events.py, 
line 360, in wrap
wrapped_fn(*arg, **kw)
  File 
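
For reference, a sketch of one way around the failure above (this is not the reply from the list, just an illustration against the Comment/Report models in this message): when after_insert fires, the Session is in the middle of a flush, so touching target.report can trigger a lazy load at a bad moment; running the counter UPDATE directly on the connection the event hands you sidesteps that.

def comment_after_insert_listener(mapper, connection, target):
    report_table = Report.__table__
    # plain Core UPDATE on the same connection/transaction as the flush
    connection.execute(
        report_table.update().
        where(report_table.c.id == target.report_id).
        values(comments_count=report_table.c.comments_count + 1)
    )

event.listen(Comment, 'after_insert', comment_after_insert_listener)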

[sqlalchemy] Re: to many statements for collection.append?

2011-11-09 Thread sector119
Thanks, Michael, it was my mistake.

I use sqlalchemy with pyramid and comment = 
ReportComment(content=form.content.data, user=self.request.user)

where self.request.user is

class RequestFactory(Request):
    @reify
    def user(self):
        db = DBSession()
        username = unauthenticated_userid(self)
        if username is not None:
            return db.query(User).filter_by(username=username).first()

that is why I have 'broken' session :/
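
A hedged side note on reusing an object that was loaded in a different session (not part of the original reply): merging it into the current session first gives a local copy that is safe to attach to new objects.

# self.request.user was loaded by another Session; merge() returns a copy
# owned by the session in use for this request.
local_user = session.merge(self.request.user)
comment = ReportComment(content=form.content.data, user=local_user)
session.add(comment)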

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/TEhPf5UVzJcJ.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] to many statements for collection.append?

2011-11-09 Thread sector119
### TEST 1 STATEMENTS

2011-11-09 13:30:24 EET LOG:  statement: BEGIN
2011-11-09 13:30:24 EET LOG:  statement: SELECT report.id AS report_id, 
report.content AS report_content, report.comments_count AS 
report_comments_count, report.user_id AS report_user_id
FROM report
WHERE report.id = 1
2011-11-09 13:30:24 EET LOG:  statement: SELECT users.id AS users_id, 
users.username AS users_username
FROM users
WHERE users.username = 'sector119'
2011-11-09 13:30:24 EET LOG:  statement: SELECT comment.id AS comment_id, 
comment.content AS comment_content, comment.report_id AS comment_report_id, 
comment.user_id AS comment_user_id
FROM comment
WHERE 1 = comment.report_id ORDER BY comment.id
2011-11-09 13:30:24 EET LOG:  statement: INSERT INTO comment (content, 
report_id, user_id) VALUES ('test comment', 1, 1) RETURNING comment.id
2011-11-09 13:30:24 EET LOG:  statement: COMMIT



### TEST 2 STATEMENTS

2011-11-09 13:30:24 EET LOG:  statement: BEGIN
2011-11-09 13:30:24 EET LOG:  statement: SELECT report.id AS report_id, 
report.content AS report_content, report.comments_count AS 
report_comments_count, report.user_id AS report_user_id
FROM report
WHERE report.id = 1
2011-11-09 13:30:24 EET LOG:  statement: SELECT users.id AS users_id, 
users.username AS users_username
FROM users
WHERE users.username = 'sector119'
2011-11-09 13:30:24 EET LOG:  statement: INSERT INTO comment (content, 
report_id, user_id) VALUES ('test comment', NULL, 1) RETURNING comment.id
2011-11-09 13:30:24 EET ERROR:  null value in column report_id violates 
not-null constraint
2011-11-09 13:30:24 EET STATEMENT:  INSERT INTO comment (content, 
report_id, user_id) VALUES ('test comment', NULL, 1) RETURNING comment.id
2011-11-09 13:30:24 EET LOG:  statement: ROLLBACK



### TEST CASE


from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

from sqlalchemy import Column, ForeignKey
from sqlalchemy import Integer, String, UnicodeText
from sqlalchemy import event

from sqlalchemy.orm import relationship


engine = create_engine('postgresql+psycopg2://USER:PASSWORD@127.0.0.1:5432/test', echo=True)

Session = sessionmaker()
Base = declarative_base()

Session.configure(bind=engine)
Base.metadata.bind = engine


class User(Base):
    __tablename__ = 'users'

    id = Column(Integer, primary_key=True)
    username = Column(String(255), unique=True, nullable=False)


class Comment(Base):
    __tablename__ = 'comment'

    id = Column(Integer, primary_key=True)
    content = Column(UnicodeText, nullable=False)
    report_id = Column(Integer, ForeignKey('report.id'), nullable=False)
    user_id = Column(Integer, ForeignKey('users.id'), nullable=False)

    user = relationship('User', backref='comments')


class Report(Base):
    __tablename__ = 'report'

    id = Column(Integer, primary_key=True)
    content = Column(UnicodeText, nullable=False)
    comments_count = Column(Integer, server_default='0', nullable=False)
    user_id = Column(Integer, ForeignKey('users.id'), nullable=False)

    user = relationship('User', backref='reports')
    comments = relationship('Comment', backref='report',
                            cascade='all, delete-orphan', order_by='Comment.id')


Base.metadata.create_all(engine)

session = Session()

user = User(username='sector119')
session.add(user)
session.flush()
session.commit()

session.close()


session = Session()

user = session.query(User).filter_by(username='sector119').one()
report = Report(content=u'test report', user=user)
session.add(report)
session.flush()
session.commit()

session.close()


session = Session()

### TEST 1

report = session.query(Report).get(1)
user = session.query(User).filter_by(username='sector119').one()
comment = Comment(content=u'test comment', user_id=user.id)
report.comments.append(comment)
session.add(report)
session.flush()
session.commit()

session.close()


### TEST 2
# HERE I GOT ERROR if I uncomment user = ... line

session = Session()

report = session.query(Report).get(1)
#user = session.query(User).filter_by(username='sector119').one()
comment = Comment(content=u'test comment', user=user)
report.comments.append(comment)
session.add(report)
session.flush()
session.commit()

session.close()



2011-11-09 13:22:10,270 INFO sqlalchemy.engine.base.Engine INSERT INTO 
comment (content, report_id, user_id) VALUES (%(content)s, %(report_id)s, 
%(user_id)s) RETURNING comment.id
2011-11-09 13:22:10,270 INFO sqlalchemy.engine.base.Engine {'content': 
u'test comment', 'user_id': 1, 'report_id': None}
2011-11-09 13:22:10,270 INFO sqlalchemy.engine.base.Engine ROLLBACK
Traceback (most recent call last):
  File test_append.py, line 92, in module
report.comments.append(comment)
  File 
/home/eps/devel/tourclub/sqlalchemy/lib/sqlalchemy/orm/attributes.py, 
line 168, in __get__
return self.impl.get(instance_state

[sqlalchemy] after_insert mapper event: can't populate target.BACKREF.attr

2011-11-08 Thread sector119
Hello.

Why is target.report None in comments_after_insert_listener? (report is the
backref for TripReport.comments.)

class TripReportComment(Base):
    __tablename__ = 'trip_report_comments'

    id = Column(Integer, primary_key=True)
    content = Column(UnicodeText, nullable=False)
    report_id = Column(Integer, ForeignKey('trip_reports.id'), nullable=False)

def comments_after_insert_listener(mapper, connection, target):
    print target.report

event.listen(TripReportComment, 'after_insert', comments_after_insert_listener)

class TripReport(Base):
    __tablename__ = 'trip_reports'

    id = Column(Integer, primary_key=True)
    ...
    comments = relationship('TripReportComment', backref='report',
                            cascade='all, delete-orphan', order_by='TripReportComment.id')


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/xlOoyFWOAZUJ.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] to many statements for collection.append?

2011-11-08 Thread sector119
Hi.

Why does SA produce the last UPDATE if I set the report_id value?
And why does it produce those two SELECTs if I never read report or
report.comments?
Shouldn't it execute only one INSERT?

report = session.query(TripReport).get(id)
comment = TripReportComment(content=form.content.data, user=request.user, 
report_id=form.report_id.data)
report.comments.append(comment)
session.add(report)


SELECT trip_reports.id AS trip_reports_id, trip_reports.title AS trip_reports_title, trip_reports.content AS trip_reports_content, trip_reports.route AS trip_reports_route, trip_reports.date_start AS trip_reports_date_start, trip_reports.date_end AS trip_reports_date_end, trip_reports.type_id AS trip_reports_type_id, trip_reports.user_id AS trip_reports_user_id, trip_reports.schedule_id AS trip_reports_schedule_id, trip_reports.create_date AS trip_reports_create_date, trip_reports.comments_count AS trip_reports_comments_count
FROM trip_reports
WHERE trip_reports.id = '7'

INSERT INTO trip_report_comments (content, user_id, report_id) VALUES ('test content', 6, '7') RETURNING trip_report_comments.id

SELECT trip_report_comments.id AS trip_report_comments_id, trip_report_comments.content AS trip_report_comments_content, trip_report_comments.create_date AS trip_report_comments_create_date, trip_report_comments.user_id AS trip_report_comments_user_id, trip_report_comments.report_id AS trip_report_comments_report_id
FROM trip_report_comments
WHERE 7 = trip_report_comments.report_id ORDER BY trip_report_comments.id

UPDATE trip_report_comments SET report_id=7 WHERE trip_report_comments.id = 5
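
For reference, a sketch of two variants that avoid the extra statements (an illustration, not the list's answer): report.comments.append(comment) loads the whole lazy collection, which is the extra SELECT, and populating the FK by two routes at once (the report_id keyword plus the collection append) is what the trailing UPDATE reconciles. Picking a single route avoids both; the form/request objects below come from the application, not SQLAlchemy.

# Variant 1: let the form value populate the FK and skip the collection entirely.
comment = TripReportComment(content=form.content.data,
                            user=request.user,
                            report_id=int(form.report_id.data))
session.add(comment)
session.commit()

# Variant 2: let the relationship manage the FK instead, and do not pass report_id:
# comment.report = session.query(TripReport).get(int(form.report_id.data))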

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/YKVkfJu5Q80J.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: how to replace selectable but not with original table alias?

2011-10-13 Thread sector119
Thanks a lot, Michael. Really, I can make t = t1 or t = t2 and then
use t in select()
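
A small sketch of that approach, with placeholder columns (it assumes t1 and t2 share the column names used in the statement, and use_first_table is an application-level flag):

def build_query(t):
    # build the statement against whichever table was chosen up front
    return select([t.c.id, t.c.name]).where(t.c.name != None).order_by(t.c.id)

t = t1 if use_first_table else t2
s = build_query(t)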

On 12 Oct, 16:46, Michael Bayer mike...@zzzcomputing.com wrote:
 replace_selectable can only swap in another selectable that's derived from 
 the original.   Else there's no way to correlate columns between each.  

 you can try doing it directly, where the column correlation is by name 
 (something SQLAlchemy doesn't ever do):

 from sqlalchemy.sql import visitors

 def replace(x):
     if x is t1:    # replace the table
         return t2
     elif t1.c.contains_column(x):  # replace columns in the table
         return t2.c[x.key]

 s = visitors.replacement_traverse(s, {}, replace)

 There also are compilation ways to do this, building a subclass of Alias 
 which compiles to the new name without the AS, this would be a little hacky.

 I also might be looking to adjust my application to not require a pattern 
 like this.   Here it would likely mean varying between t1 and t2 before 
 s is created.

 On Oct 12, 2011, at 8:32 AM, sector119 wrote:







  Hello.

  Is it possible to replace some table with a different one?
  I want to get the same query, with the same columns, having, order_by,
  and where clauses, but with a different table name.

  s.replace_selectable(t1, t1_alias) - works
  s.replace_selectable(t1, t2) - doesn't

  Thanks.

  --
  You received this message because you are subscribed to the Google Groups 
  sqlalchemy group.
  To post to this group, send email to sqlalchemy@googlegroups.com.
  To unsubscribe from this group, send email to 
  sqlalchemy+unsubscr...@googlegroups.com.
  For more options, visit this group 
  athttp://groups.google.com/group/sqlalchemy?hl=en.

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] how to replace selectable but not with original table alias?

2011-10-12 Thread sector119
Hello.

Is it possible to replace some table with a different one?
I want to get the same query, with the same columns, having, order_by,
and where clauses, but with a different table name.

s.replace_selectable(t1, t1_alias) - works
s.replace_selectable(t1, t2) - doesn't

Thanks.

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] why func.sum(some_table.c.bigintfield) returns Decimal ?

2011-09-26 Thread sector119
Hello.

Why does func.sum(some_table.c.bigintfield) return Decimal?
documents_table.c.sum has BigInteger type (the postgresql table field has
bigint type).

I get the same issue with the postgresql date_part func: when I extract the
year or month I get a Decimal result, not an int.

>>> s = select([func.sum(documents_table.c.sum).label('payments_sum'), func.sum(documents_table.c.payments).label('payments_count')])
>>> session.execute(s).fetchall()
2011-09-26 13:48:41,543 INFO sqlalchemy.engine.base.Engine SELECT
sum(documents.sum) AS payments_sum, sum(documents.payments) AS
payments_count
FROM documents
2011-09-26 13:48:41,543 INFO sqlalchemy.engine.base.Engine {}
[(Decimal('51788997139'), 8853396L)]

>>> s = select([cast(func.sum(documents_table.c.sum), BigInteger).label('payments_sum'), func.sum(documents_table.c.payments).label('payments_count')])
>>> session.execute(s).fetchall()
2011-09-26 13:48:56,243 INFO sqlalchemy.engine.base.Engine SELECT
CAST(sum(documents.sum) AS BIGINT) AS payments_sum,
sum(documents.payments) AS payments_count
FROM documents
2011-09-26 13:48:56,244 INFO sqlalchemy.engine.base.Engine {}
[(51788997139L, 8853396L)]
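
The same cast() approach works for the date_part case mentioned above; a sketch, assuming the documents table has a timestamp column named create_date and that Integer is imported from sqlalchemy:

from sqlalchemy import Integer

s = select([cast(func.date_part('year', documents_table.c.create_date), Integer).label('year')])
session.execute(s).fetchall()   # the year comes back as a plain int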

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] how to make obj copy with all relations?

2011-07-01 Thread sector119
Hello.

How can I make a copy of an object with all its relations, but with a new
PK value, and then save it?

Thanks.
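
No reply is archived here, but one common recipe for a shallow copy (SQLAlchemy 0.6+) is to detach the loaded instance, make it transient, clear the primary key and add it back; related collections would have to be loaded and copied the same way, recursively. A sketch with placeholder names:

from sqlalchemy.orm import make_transient

obj = session.query(SomeModel).get(1)   # SomeModel and session are placeholders
session.expunge(obj)
make_transient(obj)                     # now a plain transient object
obj.id = None                           # let the database assign a fresh PK
session.add(obj)
session.commit()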

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: something wrong with relationship caching at _trunk_

2010-12-22 Thread sector119
from beaker import cache

from sqlalchemy import *
from sqlalchemy.orm import mapper, sessionmaker, scoped_session,
relationship
from sqlalchemy.types import *

# from examples/beaker_caching
from eps.model import caching_query


### INIT

cache_manager = cache.CacheManager()
metadata = MetaData()
engine = create_engine('postgresql+psycopg2://LOGIN:passw...@127.0.0.1:5432/DBNAME', echo=False)
Session = scoped_session(sessionmaker(autoflush=True, autocommit=False,
                                      query_cls=caching_query.query_callable(cache_manager),
                                      bind=engine))

cache_manager.regions['default'] = {
'type': 'memory',
'lock_dir': '/tmp',
}

### END INIT


### TABLES

# groups

groups_table = Table('groups', metadata,
Column('id', Integer, primary_key=True),
Column('name', String(255), unique=True, nullable=False)
)

class Group(object):
    def __init__(self, name):
        self.name = name

mapper(Group, groups_table)

# users

users_table = Table('users', metadata,
Column('id', Integer, primary_key=True),
Column('username', String(255), unique=True, nullable=False),
Column('first_name', Unicode(255), nullable=False),
Column('last_name', Unicode(255), nullable=False),
Column('middle_name', Unicode(255), nullable=False)
)


# users_groups

users_groups_table = Table('users_groups', metadata,
Column('user_id', Integer, ForeignKey('users.id')),
Column('group_id', Integer, ForeignKey('groups.id'))
)

class User(object):
    def __init__(self, username, first_name, last_name, middle_name):
        self.username = username
        self.first_name = first_name
        self.last_name = last_name
        self.middle_name = middle_name

mapper(
    User,
    users_table,
    properties={
        'groups': relationship(Group, lazy=True, secondary=users_groups_table, backref='users')
    }
)

cache_user_relationships = caching_query.RelationshipCache('default',
'by_id', User.groups)

### END TABLES


### HELPERS

def get_user(username):
    return Session.query(User).\
               options(cache_user_relationships).\
               options(caching_query.FromCache('default', 'by_username')).\
               filter_by(username=username).one()

def print_groups(user):
    for g in user.groups:
        print g.name

### END HELPERS


### CREATE

metadata.create_all(engine)

### END CREATE


### POPULATE

u1 = User('sector119', u'A', u'B', u'C')
u1.groups = [Group('G1')]
u2 = User('sector120', u'D', u'E', u'F')
u2.groups = [Group('G2')]
Session.add_all([u1, u2])

Session.commit()

### END POPULATE


### TEST ...

u = get_user('sector119')
print '1. %s groups:' % u.username
print_groups(u)
print '2. %s groups:' % u.username
print_groups(u)

u = get_user('sector120')
print '1. %s groups:' % u.username
print_groups(u)
print '2. %s groups:' % u.username
print_groups(u)

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: something wrong with relationship caching at _trunk_

2010-12-22 Thread sector119
% python2.6 sacache.py
1. sector119 groups:
G1
2. sector119 groups:
G1
1. sector120 groups:
G1
2. sector120 groups:
G1

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: something wrong with relationship caching at _trunk_

2010-12-22 Thread sector119
The same from dpaste.com: http://dpaste.com/289387/

It works with revision 6944:

% pwd
/home/eps/devel/src/sqlalchemy.6944

% python2.6 setup.py develop
running develop
...
Finished processing dependencies for SQLAlchemy==0.6.6dev

% hg log|head -n1
changeset:   6944:b29164cca942

% python2.6 sacache.py
1. sector119 groups:
G1
2. sector119 groups:
G1
1. sector120 groups:
G2
2. sector120 groups:
G2


Now I will try to move from 6944 to 7195 (current) changeset by changeset
and see where it breaks.

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: something wrong with relationship caching at _trunk_

2010-12-22 Thread sector119
Thank you, Michael!

On 22 Dec, 18:38, Michael Bayer mike...@zzzcomputing.com wrote:
 Thank you, please apply the change in r9327b3748997 to your 
 _params_from_query() function.

 On Dec 22, 2010, at 5:47 AM, sector119 wrote:







  from beaker import cache

  from sqlalchemy import *
  from sqlalchemy.orm import mapper, sessionmaker, scoped_session,
  relationship
  from sqlalchemy.types import *

  # from examples/beaker_caching
  from eps.model import caching_query

  ### INIT

  cache_manager = cache.CacheManager()
  metadata = MetaData()
  engine = create_engine('postgresql+psycopg2://
  LOGIN:passw...@127.0.0.1:5432/DBNAME', echo=False)
  Session = scoped_session(sessionmaker(autoflush=True,
  autocommit=False,

  query_cls=caching_query.query_callable(cache_manager), bind=engine))

  cache_manager.regions['default'] = {
     'type': 'memory',
     'lock_dir': '/tmp',
  }

  ### END INIT

  ### TABLES

  # groups

  groups_table = Table('groups', metadata,
     Column('id', Integer, primary_key=True),
     Column('name', String(255), unique=True, nullable=False)
  )

  class Group(object):
     def __init__(self, name):
         self.name = name

  mapper(Group, groups_table)

  # users

  users_table = Table('users', metadata,
     Column('id', Integer, primary_key=True),
     Column('username', String(255), unique=True, nullable=False),
     Column('first_name', Unicode(255), nullable=False),
     Column('last_name', Unicode(255), nullable=False),
     Column('middle_name', Unicode(255), nullable=False)
  )

  # users_groups

  users_groups_table = Table('users_groups', metadata,
     Column('user_id', Integer, ForeignKey('users.id')),
     Column('group_id', Integer, ForeignKey('groups.id'))
  )

  class User(object):
     def __init__(self, username, first_name, last_name, middle_name):
         self.username = username
         self.first_name = first_name
         self.last_name = last_name
         self.middle_name = middle_name

  mapper(
     User,
     users_table,
     properties={
         'groups': relationship(Group, lazy=True,
  secondary=users_groups_table, backref='users')
     }
  )

  cache_user_relationships = caching_query.RelationshipCache('default',
  'by_id', User.groups)

  ### END TABLES

  ### HELPERS

  def get_user(username):
     return Session.query(User).\
                    options(cache_user_relationships).\
                    options(caching_query.FromCache('default',
  'by_username')).\
                    filter_by(username=username).one()

  def print_groups(user):
     for g in user.groups:
         print g.name

  ### END HELPERS

  ### CREATE

  metadata.create_all(engine)

  ### END CREATE

  ### POPULATE

  u1 = User('sector119', u'A', u'B', u'C')
  u1.groups = [Group('G1')]
  u2 = User('sector120', u'D', u'E', u'F')
  u2.groups = [Group('G2')]
  Session.add_all([u1, u2])

  Session.commit()

  ### END POPULATE

  ### TEST ...

  u = get_user('sector119')
  print '1. %s groups:' % u.username
  print_groups(u)
  print '2. %s groups:' % u.username
  print_groups(u)

  u = get_user('sector120')
  print '1. %s groups:' % u.username
  print_groups(u)
  print '2. %s groups:' % u.username
  print_groups(u)

  --
  You received this message because you are subscribed to the Google Groups 
  sqlalchemy group.
  To post to this group, send email to sqlalch...@googlegroups.com.
  To unsubscribe from this group, send email to 
  sqlalchemy+unsubscr...@googlegroups.com.
  For more options, visit this group 
  athttp://groups.google.com/group/sqlalchemy?hl=en.

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] something wrong with relationship caching at _trunk_

2010-12-21 Thread sector119
Hello!

I have a problem with my relationship caching with 0.7b1 (current
trunk)

When I perform a query on the `User` model with a `username` param, I then
access `groups`, a lazy relationship that is cached separately from the main
query.
After that I run the same query on the `User` model with another `username`
param and access the `groups` relationship: I get the same groups as with the
first query, and no SQL is executed to fetch those groups...

The same code works correctly with 0.6.5.


To reproduce:

% sudo invoke-rc.d memcached restart
[sudo] password for sector119:
Restarting memcached: memcached.
%

# sacache.py

from sqlalchemy import create_engine

from eps.model import init_model
from eps.model import meta
from eps.model import caching_query as cache
from eps.model import cache_user_relationships
from eps.model import User
from eps.model import SYSTEM_SCHEMA


def get_user(username):
    user = meta.Session.query(User).\
               options(cache_user_relationships).\
               options(cache.FromCache('default', 'by_username')).\
               filter_by(username=username, disabled=False).first()

    meta.Session.connection().execute(
        'SET search_path TO {0}, {1}'.format(SYSTEM_SCHEMA, user.locality.schema))

    return user

def print_groups(user):
    for g in u.groups:
        print g.name


engine = create_engine('postgresql+psycopg2://eps:mypassw...@127.0.0.1:5432/eps', echo=True)
init_model(engine)


u = get_user('sector119')
print '1. %s groups:' % u.username
print_groups(u)
print '2. %s groups:' % u.username
print_groups(u)

u = get_user('privat')
print '1. %s groups:' % u.username
print_groups(u)
print '2. %s groups:' % u.username
print_groups(u)

Output:

1. sector119 groups:
wheel
2. sector119 groups:
wheel

1. privat groups:
wheel
2. privat groups:
wheel


Echoing SQL:

2010-12-21 15:50:38,273 INFO sqlalchemy.engine.base.Engine select
version()
2010-12-21 15:50:38,274 INFO sqlalchemy.engine.base.Engine {}
2010-12-21 15:50:38,275 INFO sqlalchemy.engine.base.Engine select
current_schema()
2010-12-21 15:50:38,275 INFO sqlalchemy.engine.base.Engine {}
2010-12-21 15:50:38,276 INFO sqlalchemy.engine.base.Engine BEGIN
(implicit)

# FIRST user

2010-12-21 15:50:38,277 INFO sqlalchemy.engine.base.Engine SELECT
system.users.id AS system_users_id, system. users.username AS
system_users_username, system.users.password AS system_users_password,
system.users.first_name AS system_users_first_name,
system.users.last_name AS system_users_last_name, system.users.
middle_name AS system_users_middle_name, system.users.locality_id AS
system_users_locality_id, system.users.  office_id AS
system_users_office_id, system.users.email AS system_users_email,
system.users.create_date ASsystem_users_create_date,
system.users.last_login AS system_users_last_login,
system.users.expire AS  system_users_expire,
system.users.disabled AS system_users_disabled
FROM system.users
WHERE system.users.username = %(username_1)s AND system.users.disabled
= %(disabled_1)s
 LIMIT %(param_1)s OFFSET %(param_2)s
2010-12-21 15:50:38,277 INFO sqlalchemy.engine.base.Engine {'param_1':
1, 'disabled_1': False, 'username_1':  'sector119', 'param_2': 0}

2010-12-21 15:50:38,283 INFO sqlalchemy.engine.base.Engine SELECT
system.localities.id AS system_localities_id,
system.localities.name AS system_localities_name,
system.localities.type AS system_localities_type,
system.localities.schema AS system_localities_schema
FROM system.localities
WHERE system.localities.id = %(param_1)s
2010-12-21 15:50:38,283 INFO sqlalchemy.engine.base.Engine {'param_1':
1}

2010-12-21 15:50:38,285 INFO sqlalchemy.engine.base.Engine SET
search_path TO system,ternopil
2010-12-21 15:50:38,285 INFO sqlalchemy.engine.base.Engine {}

# FIRST user GROUPS

2010-12-21 15:50:38,286 INFO sqlalchemy.engine.base.Engine SELECT
system.groups.id AS system_groups_id,   system.groups.name AS
system_groups_name
FROM system.groups, system.users_groups
WHERE %(param_1)s = system.users_groups.user_id AND system.groups.id =
system.users_groups.group_id
2010-12-21 15:50:38,286 INFO sqlalchemy.engine.base.Engine {'param_1':
2}

# NEXT user

2010-12-21 15:50:38,289 INFO sqlalchemy.engine.base.Engine SELECT
system.users.id AS system_users_id, system. users.username AS
system_users_username, system.users.password AS system_users_password,
system.users.first_name AS system_users_first_name,
system.users.last_name AS system_users_last_name, system.users.
middle_name AS system_users_middle_name, system.users.locality_id AS
system_users_locality_id, system.users.  office_id AS
system_users_office_id, system.users.email AS system_users_email,
system.users.create_date ASsystem_users_create_date,
system.users.last_login AS system_users_last_login,
system.users.expire AS  system_users_expire,
system.users.disabled AS system_users_disabled
FROM system.users
WHERE system.users.username = %(username_1)s

[sqlalchemy] Re: something wrong with relationship caching at _trunk_

2010-12-21 Thread sector119
Yep, it should be: def print_groups(u)

Michael, do you need a complete working (or wrongly working :) sample, with
data?

That code is working... but against existing data...

On 21 Dec, 16:39, Michael Bayer mike...@zzzcomputing.com wrote:
 I couldn't begin to know what the issue is with code fragments like this.  
 Though your print_groups() function here is wrong:

 def print_groups(user):
    for g in u.groups:
        print g.name

 On Dec 21, 2010, at 9:15 AM, sector119 wrote:







  Hello!

  I have a problem with my relationship caching with 0.7b1 (current
  trunk)

  When I perform query on `User` model with `username` param, than
  access some lazy and cached separaterly from main query relationship -
  `groups`.
  After that I exec the same query on `User` model with another
  `username` param and access `groups` relationship I got the same
  groups as with first query and no sql being executed to get those
  groups...

  The same code works correctly with 0.6.5.

  To reproduce:

  % sudo invoke-rc.d memcached restart
  [sudo] password for sector119:
  Restarting memcached: memcached.
  %

  # sacache.py

  from sqlalchemy import create_engine

  from eps.model import init_model
  from eps.model import meta
  from eps.model import caching_query as cache
  from eps.model import cache_user_relationships
  from eps.model import User
  from eps.model import SYSTEM_SCHEMA

  def get_user(username):
     user = meta.Session.query(User).\
                         options(cache_user_relationships).\
                         options(cache.FromCache('default',
  'by_username')).\
                         filter_by(username=username,
  disabled=False).first()

     meta.Session.connection().execute('SET search_path TO {0},
  {1}'.format(SYSTEM_SCHEMA,

  user.locality.schema))

     return user

  def print_groups(user):
     for g in u.groups:
         print g.name

  engine = create_engine('postgresql+psycopg2://
  eps:mypassw...@127.0.0.1:5432/eps', echo=True)
  init_model(engine)

  u = get_user('sector119')
  print '1. %s groups:' % u.username
  print_groups(u)
  print '2. %s groups:' % u.username
  print_groups(u)

  u = get_user('privat')
  print '1. %s groups:' % u.username
  print_groups(u)
  print '2. %s groups:' % u.username
  print_groups(u)

  Output:

  1. sector119 groups:
  wheel
  2. sector119 groups:
  wheel

  1. privat groups:
  wheel
  2. privat groups:
  wheel

  Echoing SQL:

  2010-12-21 15:50:38,273 INFO sqlalchemy.engine.base.Engine select
  version()
  2010-12-21 15:50:38,274 INFO sqlalchemy.engine.base.Engine {}
  2010-12-21 15:50:38,275 INFO sqlalchemy.engine.base.Engine select
  current_schema()
  2010-12-21 15:50:38,275 INFO sqlalchemy.engine.base.Engine {}
  2010-12-21 15:50:38,276 INFO sqlalchemy.engine.base.Engine BEGIN
  (implicit)

  # FIRST user

  2010-12-21 15:50:38,277 INFO sqlalchemy.engine.base.Engine SELECT
  system.users.id AS system_users_id, system. users.username AS
  system_users_username, system.users.password AS system_users_password,
  system.users.        first_name AS system_users_first_name,
  system.users.last_name AS system_users_last_name, system.users.
  middle_name AS system_users_middle_name, system.users.locality_id AS
  system_users_locality_id, system.users.  office_id AS
  system_users_office_id, system.users.email AS system_users_email,
  system.users.create_date AS    system_users_create_date,
  system.users.last_login AS system_users_last_login,
  system.users.expire AS          system_users_expire,
  system.users.disabled AS system_users_disabled
  FROM system.users
  WHERE system.users.username = %(username_1)s AND system.users.disabled
  = %(disabled_1)s
  LIMIT %(param_1)s OFFSET %(param_2)s
  2010-12-21 15:50:38,277 INFO sqlalchemy.engine.base.Engine {'param_1':
  1, 'disabled_1': False, 'username_1':  'sector119', 'param_2': 0}

  2010-12-21 15:50:38,283 INFO sqlalchemy.engine.base.Engine SELECT
  system.localities.id AS                     system_localities_id,
  system.localities.name AS system_localities_name,
  system.localities.type AS             system_localities_type,
  system.localities.schema AS system_localities_schema
  FROM system.localities
  WHERE system.localities.id = %(param_1)s
  2010-12-21 15:50:38,283 INFO sqlalchemy.engine.base.Engine {'param_1':
  1}

  2010-12-21 15:50:38,285 INFO sqlalchemy.engine.base.Engine SET
  search_path TO system,ternopil
  2010-12-21 15:50:38,285 INFO sqlalchemy.engine.base.Engine {}

  # FIRST user GROUPS

  2010-12-21 15:50:38,286 INFO sqlalchemy.engine.base.Engine SELECT
  system.groups.id AS system_groups_id,       system.groups.name AS
  system_groups_name
  FROM system.groups, system.users_groups
  WHERE %(param_1)s = system.users_groups.user_id AND system.groups.id =
  system.users_groups.group_id
  2010-12-21 15:50:38,286 INFO sqlalchemy.engine.base.Engine {'param_1':
  2}

  # NEXT user

  2010-12-21 15:50:38,289 INFO sqlalchemy.engine.base.Engine SELECT
  system.users.id

[sqlalchemy] How to invalidate RelationshipCache?

2010-07-05 Thread sector119
Hi All!

I have users_table mapped to User model.

###

mapper(
User,
users_table,
properties={
'locality': relation(Locality, uselist=False),
'office': relation(Office, uselist=False,
 
primaryjoin=users_table.c.office_id==offices_table.c.id,
 
foreign_keys=[users_table.c.office_id]),
'groups': relation(Group, lazy=True,
secondary=users_groups_table, backref='users'),
'roles': relation(Role, lazy=True,
secondary=users_roles_table, backref='users'),
'localities': relation(Locality, lazy=True,
secondary=users_localities_table, backref='users'),
}
)

# Caching options. A set of RelationshipCache options
# which can be applied to Query(), causing the lazy load
# of these attributes to be loaded from cache.
cache_user_relationships = cache.RelationshipCache('default', 'by_id',
User.locality).and_(
   cache.RelationshipCache('default', 'by_id',
User.office)).and_(
   cache.RelationshipCache('default', 'by_id',
User.groups)).and_(
   cache.RelationshipCache('default', 'by_id',
User.roles)).and_(
   cache.RelationshipCache('default', 'by_id',
User.localities))

###

When I try to invalidate the query, I do:

q = meta.Session.query(User).\
 options(cache_user_relationships).\
 options(cache.FromCache('default', 'by_username')).\
 filter_by(username=SOME_USERNAME, disabled=False)
q.invalidate()

But this invalidates only the User model data, not the data cached from
relations that use RelationshipCache :( How can I invalidate the
relationship cache? If possible, I want to invalidate only a _certain_
relation's cache, for example `office`.

Thanks!

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: How to invalidate RelationshipCache?

2010-07-05 Thread sector119
 cache_user_relationships = cache.RelationshipCache('default', 'by_id',
 User.locality).and_(
                            cache.RelationshipCache('default', 'by_id',
 User.office)).and_(
                            cache.RelationshipCache('default', 'by_id',
 User.groups)).and_(
                            cache.RelationshipCache('default', 'by_id',
 User.roles)).and_(
                            cache.RelationshipCache('default', 'by_id',
 User.localities))

Is it ok that I set the same `namespace` for RelationshipCache?

I get that I can invalidate relationship cache with:
q = meta.Session.query(User).\
  options(cache.FromCache('default', 'by_id')).\
  filter_by(username=username, disabled=False)
q.invalidate()

But would it invalidate all relationship caches? If I only need to
invalidate the `office` relationship cache, do I have to set different cache
namespaces for all cached relations?

Thank you!

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: How to invalidate RelationshipCache?

2010-07-05 Thread sector119
Does it make sense to have something like RelationFromCache(region,
namespace, attribute)?

To use it like:
meta.Session.query(User).\
 options(cache.RelationFromCache('default',
'by_id', User.office)).\
 filter_by(username=username, disabled=False).\
 invalidate()

To invalidate certain relation.

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: How to invalidate RelationshipCache?

2010-07-05 Thread sector119
I was wrong. This works, but it's not handy to manually recreate a query
that was generated by SA (it's a lazy relation) :(

Maybe it's possible to do that another way?

q = meta.Session.query(Office).\
   options(cache.FromCache('default',
'by_id')).\
   filter_by(id=197)
q.invalidate()

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: How to invalidate RelationshipCache?

2010-07-05 Thread sector119
And one more question about caching: how can I invalidate all cached
queries for the User model?

For one user I make:
q = meta.Session.query(User).\
 options(cache_user_relationships).\
 options(cache.FromCache('default', 'by_username')).\
 filter_by(username=SOME_USERNAME, disabled=False)
q.invalidate()

But how to reset them for all users?
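
One hedged sketch, relying on a private helper of the caching_query recipe (the 0.6-era example ships _get_cache_parameters): build any User query with the same FromCache options, then clear the whole Beaker namespace behind it instead of removing a single key, which drops the cached queries for every username at once.

q = meta.Session.query(User).\
        options(cache_user_relationships).\
        options(cache.FromCache('default', 'by_username')).\
        filter_by(username='anything', disabled=False)

beaker_cache, _key = cache._get_cache_parameters(q)
beaker_cache.clear()   # invalidates every cached 'by_username' User query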

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] need help with array of tuples and composite custom type in postgresql

2010-01-12 Thread sector119
Hi All.

I have a problem with this query; how can I rewrite it using sqlalchemy?

What do I do with the tuples, array_agg(row(meter_id,
organization_reading, reading)::meterreading)? In psql I see it like
{(499680,137,141),(500765,258,267)}. And what about the composite custom
type, meterreading?

How do I create that type? And how do I access the array of tuples: is it
supported by sqlalchemy's multi-dimensional ARRAY type?

CREATE TYPE meterreading AS (
meter_id VARCHAR(255),
organization_reading INTEGER,
reading INTEGER);

SELECT t.*,
  m.readings
FROM transactions t
   LEFT OUTER JOIN
 (SELECT office_id, serial, commit_date, service_id,
array_agg(row(meter_id,
organization_reading, reading)::meterreading) AS readings
  FROM meter_readings
  GROUP BY office_id, serial, commit_date, service_id) m
  USING (office_id, serial, commit_date, service_id)
WHERE commit_date = '2010-01-11';

Thanks a lot!
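
No reply is archived here, but a hedged sketch of the aggregate subquery is below; it assumes meter_readings is available as a Table object (defined or autoloaded), leans on literal_column for the ROW(...)::meterreading cast, and the composite values come back from the driver as strings unless a custom type parses them.

from sqlalchemy import func, literal_column, select

readings = func.array_agg(
    literal_column("ROW(meter_id, organization_reading, reading)::meterreading")
).label('readings')

m = select([meter_readings.c.office_id,
            meter_readings.c.serial,
            meter_readings.c.commit_date,
            meter_readings.c.service_id,
            readings]).\
        group_by(meter_readings.c.office_id,
                 meter_readings.c.serial,
                 meter_readings.c.commit_date,
                 meter_readings.c.service_id).\
        alias('m')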
-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.




[sqlalchemy] Re: Something wrong with pool. Got Connection is already closed None None with copy_from and big file.

2009-11-21 Thread sector119


On Nov 4, 7:13 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 copy_from() probably creates some state that is not compatible with the
 connection being used afterwards for subsequent operations, or
 alternatively copy_from() is not compatible with some previous state.  
 The pool does nothing special to the connections which it stores except
 calling rollback() when they are returned.

 If you can try to isolate the issue to an exact sequence of events (i.e.,
 don't use a Session or ORM - just use an engine and connect()) that would
 reveal more about what's going on.

Now I tried copy_from without the Session or ORM, using the engine only, and
everything is ok :) What does that mean? :)

engine = create_engine(conf['sqlalchemy.url'])
connection = engine.raw_connection()
connection.cursor().cursor.copy_from(f, i.table, sep=chr(int(conf['import.separator'])), columns=map(str, i.fields.split(',')))
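
A hedged sketch building on the snippet above: the raw DBAPI connection is checked out from the pool, so committing the COPY and closing the proxy (which returns the connection to the pool) keeps the pool state clean.

connection = engine.raw_connection()
try:
    connection.cursor().cursor.copy_from(
        f, i.table,
        sep=chr(int(conf['import.separator'])),
        columns=map(str, i.fields.split(',')))
    connection.commit()
finally:
    connection.close()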



--

You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=.




[sqlalchemy] ForeignKey and ForeignKeyConstraint copy() method make FK copy without arguments

2009-11-04 Thread sector119

Hi All

The ForeignKey and ForeignKeyConstraint copy() methods make an FK copy without
its arguments, for example deferrable, ...

class ForeignKey:
    def copy(self, schema=None):
        """Produce a copy of this ForeignKey object."""
        return ForeignKey(self._get_colspec(schema=schema))

class ForeignKeyConstraint:
    def copy(self, **kw):
        return ForeignKeyConstraint(
            [x.parent.name for x in self._elements.values()],
            [x._get_colspec(**kw) for x in self._elements.values()],
            name=self.name,
            onupdate=self.onupdate,
            ondelete=self.ondelete,
            use_alter=self.use_alter
        )
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: ForeignKey and ForeignKeyConstraint copy() method make FK copy without arguments

2009-11-04 Thread sector119

--- schema.py   2009-11-04 13:27:37.124466356 +0200
+++ schema.py   2009-11-04 13:27:21.486350738 +0200
@@ -909,7 +909,17 @@
 def copy(self, schema=None):
 Produce a copy of this ForeignKey object.

-return ForeignKey(self._get_colspec(schema=schema))
+return ForeignKey(
+self._get_colspec(schema=schema),
+constraint=self.constraint,
+use_alter=self.use_alter,
+name=self.name,
+onupdate=self.onupdate,
+ondelete=self.ondelete,
+deferrable=self.deferrable,
+initially=self.initially,
+link_to_name=self.link_to_name
+)

 def _get_colspec(self, schema=None):
 if schema:
@@ -1494,7 +1504,10 @@
 name=self.name,
 onupdate=self.onupdate,
 ondelete=self.ondelete,
+deferrable=self.deferrable,
+initially=self.initially,
+link_to_name=self.link_to_name
 )

 class PrimaryKeyConstraint(ColumnCollectionConstraint):

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: not expected generated update query values

2009-11-04 Thread sector119

Oh, I forgot about that; it wasn't the behaviour that I really expected..
I modified sql/compiler.py a bit to show what I mean. If I specify some
bindparam()s in values(...), I want _only_ those columns to appear in the
UPDATE SET or INSERT VALUES..

--- compiler.py.orig2009-11-02 18:00:17.548954070 +0200
+++ compiler.py 2009-11-02 18:01:11.682036402 +0200
@@ -793,9 +793,9 @@

 # if we have statement parameters - set defaults in the
 # compiled params
-if self.column_keys is None:
-parameters = {}
-else:
+parameters = {}
+
+if self.column_keys is not None and not stmt.parameters:
 parameters = dict((sql._column_as_key(key), required)
   for key in self.column_keys if key not
in bind_names)
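
For comparison, a hedged sketch of what stock SQLAlchemy does when the bind names do not collide with column names (placeholder table and connection, 0.6-and-later update() style): naming columns explicitly in values() with bindparam() keeps the SET list to exactly those columns, because the executemany keys are then only bind values, not column defaults.

from sqlalchemy import bindparam

# some_table is a placeholder Table with 'id' and 'name' columns; conn is a Connection.
stmt = some_table.update().\
    where(some_table.c.id == bindparam('b_id')).\
    values(name=bindparam('b_name'))

# Compiles to: UPDATE some_table SET name=%(b_name)s WHERE some_table.id = %(b_id)s
conn.execute(stmt, [{'b_id': 1, 'b_name': u'x'}, {'b_id': 2, 'b_name': u'y'}])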

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: ForeignKey and ForeignKeyConstraint copy() method make FK copy without arguments

2009-11-04 Thread sector119

done.
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Something wrong with pool. Got Connection is already closed None None with copy_from and big file.

2009-11-04 Thread sector119

I use sqlalchemy 0.6  and psycopg2 2.0.13.

When I try to copy from a little file it's ok; with a bigger one I get the
error and log posted below.
When I use pgbouncer and pass poolclass=pool.NullPool to create_engine,
everything works perfectly with the large file too.

My query is:

session.connection().connection.cursor().cursor.copy_from(f, table,
sep=chr(int(conf['import.separator'])), columns=map(str, fields.split
(',')))

Log:

2009-11-04 18:43:30 EET LOG:  statement: COPY people
(id,street_id,building,corpus,appartment,block,last_name,hd,residents,space,rooms,privilege,comment)
FROM stdin WITH DELIMITER AS E' '
2009-11-04 18:44:08 EET LOG:  could not receive data from client:
Connection reset by peer
2009-11-04 18:44:08 EET CONTEXT:  COPY people, line 6146
2009-11-04 18:44:08 EET STATEMENT:  COPY people
(id,street_id,building,corpus,appartment,block,last_name,hd,residents,space,rooms,privilege,comment)
FROM stdin WITH DELIMITER AS E''
2009-11-04 18:44:08 EET LOG:  incomplete message from client
2009-11-04 18:44:08 EET CONTEXT:  COPY people, line 6146
2009-11-04 18:44:08 EET STATEMENT:  COPY people
(id,street_id,building,corpus,appartment,block,last_name,hd,residents,space,rooms,privilege,comment)
FROM stdin WITH DELIMITER AS E''
2009-11-04 18:44:08 EET ERROR:  unexpected EOF on client connection
2009-11-04 18:44:08 EET CONTEXT:  COPY people, line 6146
2009-11-04 18:44:08 EET STATEMENT:  COPY people
(id,street_id,building,corpus,appartment,block,last_name,hd,residents,space,rooms,privilege,comment)
FROM stdin WITH DELIMITER AS E''
2009-11-04 18:44:08 EET LOG:  could not send data to client: Broken
pipe
2009-11-04 18:44:08 EET LOG:  could not receive data from client:
Connection reset by peer
2009-11-04 18:44:08 EET LOG:  unexpected EOF on client connection
2009-11-04 18:44:08 EET LOG:  disconnection: session time: 0:00:44.763
user=eps database=eps host=localhost port=44257

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: can't import ARRAY from dialects.postgresql

2009-10-24 Thread sector119

The latest trunk.

>>> from sqlalchemy.dialects.postgresql import ARRAY
Traceback (most recent call last):
  File "<console>", line 1, in <module>
ImportError: cannot import name ARRAY

>>> from sqlalchemy.dialects.postgresql.base import ARRAY

>>> from sqlalchemy.dialects import postgresql

>>> 'ARRAY' in dir(postgresql)
False

We have no ARRAY in the __all__ list of
sqlalchemy/dialects/postgresql/__init__.py...

% grep ARRAY dialects/postgresql/__init__.py
%

The same for orm and sql: I can't import them from sqlalchemy.. You have to
write from sqlalchemy.orm import mapper, relation, ..., not just
from sqlalchemy import orm :/



[sqlalchemy] UnicodeDecodeError with pg8000 postgresql.conf client_encoding=utf8, engine encoding=utf-8

2009-10-24 Thread sector119

Hi All.

I've got: UnicodeDecodeError: 'ascii' codec can't decode byte 0xa1 in
position 3: ordinal not in range(128)
Why can this happen? I use client_encoding=utf8 in postgresql.conf and
encoding='utf-8' in create_engine().

File 'string', line 2 in submit
File '/home/sector119/devel/eps_env/lib/python2.6/site-packages/
Pylons-0.9.7-py2.6.egg/pylons/decorators/__init__.py', line 207 in
wrapper
  return func(self, *args, **kwargs)
File '/home/sector119/devel/eps_env/src/eps/eps/controllers/login.py',
line 39 in submit
  user = meta.Session.query(User).options(eagerload
('locality')).filter_by(username=username).first()
File '/home/sector119/devel/eps_env/lib/python2.6/site-packages/
SQLAlchemy-0.6beta1dev_r0-py2.6.egg/sqlalchemy/orm/query.py', line
1250 in first
  ret = list(self[0:1])
File '/home/sector119/devel/eps_env/lib/python2.6/site-packages/
SQLAlchemy-0.6beta1dev_r0-py2.6.egg/sqlalchemy/orm/query.py', line
1171 in __getitem__
  return list(res)
File '/home/sector119/devel/eps_env/lib/python2.6/site-packages/
SQLAlchemy-0.6beta1dev_r0-py2.6.egg/sqlalchemy/orm/query.py', line
1311 in __iter__
  return self._execute_and_instances(context)
File '/home/sector119/devel/eps_env/lib/python2.6/site-packages/
SQLAlchemy-0.6beta1dev_r0-py2.6.egg/sqlalchemy/orm/query.py', line
1314 in _execute_and_instances
  result = self.session.execute(querycontext.statement,
params=self._params, mapper=self._mapper_zero_or_none())
File '/home/sector119/devel/eps_env/lib/python2.6/site-packages/
SQLAlchemy-0.6beta1dev_r0-py2.6.egg/sqlalchemy/orm/session.py', line
739 in execute
  clause, params or {})
File '/home/sector119/devel/eps_env/lib/python2.6/site-packages/
SQLAlchemy-0.6beta1dev_r0-py2.6.egg/sqlalchemy/engine/base.py', line
975 in execute
  return Connection.executors[c](self, object, multiparams, params)
File '/home/sector119/devel/eps_env/lib/python2.6/site-packages/
SQLAlchemy-0.6beta1dev_r0-py2.6.egg/sqlalchemy/engine/base.py', line
1037 in _execute_clauseelement
  return self.__execute_context(context)
File '/home/sector119/devel/eps_env/lib/python2.6/site-packages/
SQLAlchemy-0.6beta1dev_r0-py2.6.egg/sqlalchemy/engine/base.py', line
1060 in __execute_context
  self._cursor_execute(context.cursor, context.statement,
context.parameters[0], context=context)
File '/home/sector119/devel/eps_env/lib/python2.6/site-packages/
SQLAlchemy-0.6beta1dev_r0-py2.6.egg/sqlalchemy/engine/base.py', line
1120 in _cursor_execute
  self.dialect.do_execute(cursor, statement, parameters,
context=context)
File '/home/sector119/devel/eps_env/lib/python2.6/site-packages/
SQLAlchemy-0.6beta1dev_r0-py2.6.egg/sqlalchemy/engine/default.py',
line 181 in do_execute
  cursor.execute(statement, parameters)
File 'build/bdist.linux-i686/egg/pg8000/dbapi.py', line 243 in _fn
File 'build/bdist.linux-i686/egg/pg8000/dbapi.py', line 312 in execute
File 'build/bdist.linux-i686/egg/pg8000/dbapi.py', line 317 in
_execute
File 'build/bdist.linux-i686/egg/pg8000/interface.py', line 303 in
execute
File 'build/bdist.linux-i686/egg/pg8000/interface.py', line 108 in
__init__
File 'build/bdist.linux-i686/egg/pg8000/protocol.py', line 918 in _fn
File 'build/bdist.linux-i686/egg/pg8000/protocol.py', line 1069 in
parse
File 'build/bdist.linux-i686/egg/pg8000/protocol.py', line 975 in
_send
File 'build/bdist.linux-i686/egg/pg8000/protocol.py', line 121 in
serialize
UnicodeDecodeError: 'ascii' codec can't decode byte 0xa1 in position
3: ordinal not in range(128)
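
A guess at what is going on (an assumption drawn from the traceback, not
something confirmed): pg8000 is handed a plain byte string for the username
bind parameter and tries to promote it to unicode with the default ascii
codec while serializing the query. Decoding the value explicitly before
querying would sidestep that:

from sqlalchemy.orm import eagerload

# Python 2 era sketch: make sure the bind parameter is a unicode object
if isinstance(username, str):
    username = username.decode('utf-8')

user = meta.Session.query(User).options(eagerload('locality')) \
    .filter_by(username=username).first()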



[sqlalchemy] Re: not expected generated update query values

2009-10-23 Thread sector119

Thanks a lot, Michael!

On Oct 23, 4:09 am, Michael Bayer mike...@zzzcomputing.com wrote:
 On Oct 22, 2009, at 3:26 PM, sector119 wrote:



  Something strange, Michael... All bindparams are different. The compiler
  should not add all params to SET if values() has bindparam args, no?
  where() have:
     bindparam('commit_date'), bindparam('serial'), bindparam
  ('office_id')
  values() have:
     bindparam('rollback_date'), bindparam('rollback_time'), bindparam
  ('rollback_user_id')

 OK i have this fully repaired in trunk r6428.  if you use a bindparam
 () with a column name, it will be honored as is and won't be  
 implicitly added to the SET or VALUES clause of an update or insert.



  All bindparams are different.

  params={'commit_date':'2009-10-22', 'serial':1, 'office_id':1,
  'rollback_date':'2009-10-22', 'rollback_time':'11:12:15',
  'rollback_user_id':1, 'foobar':1, 'sum':111})

  UPDATE transactions SET serial=%(serial)s, office_id=%(office_id)s,
  sum=%(sum)
  s, commit_date=%(commit_date)s, rollback_date=%(rollback_date)s,
  rollback_time=%(rollback_time)s, rollback_user_id=%(rollback_user_id)
  s
  WHERE transactions.commit_date = %(commit_date)s AND
  transactions.serial = %(serial)s AND transactions.office_id = %
  (office_id)s

  On Oct 22, 19:45, Michael Bayer mike...@zzzcomputing.com wrote:
  sector119 wrote:

  though likely cleaner to pass the exact set of parameters desired.

  How to pass those params if I use bindparam at where() and values(),
  but I don't want to update columns that are in the where() clause, only
  at values()?

  if you are using bindparam() objects, you'd given them all distinct  
  names.



[sqlalchemy] own compiler visit_update method

2009-10-23 Thread sector119

Hi!

Is it possible to easily add tables to the FROM clause? Right now I use the
following code to append additional table names, but it looks like a hack..

At PGCompiler.visit_update() I see
self.stack.append({'from': set([update_stmt.table])}). Where and how is it
used? Maybe using this I can append some elements to the FROM?

def visit_update(self, update_stmt):
    text = super(PGCompiler, self).visit_update(update_stmt)
    if 'postgresql_from' in update_stmt.kwargs:
        text = self._append_from(text, update_stmt)
    return text

def _append_from(self, text, stmt):
    return text.replace(' WHERE', ' FROM ' + string.join(
        [table.name for table in stmt.kwargs['postgresql_from']], ', ') + ' WHERE')
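
A related sketch: the same hack can be packaged with the compiler extension
instead of subclassing the dialect compiler (the replace() trick and the
postgresql_from kwarg are unchanged and remain assumptions of mine):

from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import Update

@compiles(Update, 'postgresql')
def _update_with_from(element, compiler, **kw):
    # let the PostgreSQL compiler build the normal UPDATE first
    text = compiler.visit_update(element, **kw)
    extra = element.kwargs.get('postgresql_from')
    if extra:
        # splice a FROM clause in front of WHERE
        text = text.replace(
            ' WHERE',
            ' FROM ' + ', '.join(t.name for t in extra) + ' WHERE',
            1)
    return text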





[sqlalchemy] can't import ARRAY from dialects.postgresql

2009-10-23 Thread sector119

Is there something wrong with the ARRAY type in SA 0.6? No
dialects/<dialect>/__init__.py has ARRAY in its __all__!?



[sqlalchemy] Re: can't import ARRAY from dialects.postgresql

2009-10-23 Thread sector119

It's on trunk.
Also, I can't import orm from sqlalchemy :/

On Oct 23, 6:00 pm, sector119 sector...@gmail.com wrote:
 Is there something wrong with ARRAY type in SA 0.6? no dialects/
 dialect/__init__.py has ARRAY at __all__ !?



[sqlalchemy] not expected generated update query values

2009-10-22 Thread sector119

Hi All!

Why do I get in the SET part not only the items from values(...), but
everything from the params passed to session.execute?

transactions_update = model.transactions_table.update().where(and_(
    model.transactions_table.c.commit_date == bindparam('commit_date'),
    model.transactions_table.c.serial == bindparam('serial'),
    model.transactions_table.c.office_id == bindparam('office_id'))
).values(
    rollback_date=bindparam('rollback_date'),
    rollback_time=bindparam('rollback_time'),
    rollback_user_id=bindparam('rollback_user_id'))

meta.Session.execute(transactions_update, params={
    'commit_date': '2009-10-22', 'serial': 1, 'office_id': 1,
    'rollback_date': '2009-10-22', 'rollback_time': '11:12:15',
    'rollback_user_id': 1, 'foobar': 1, 'sum': 111})

17:31:50,761 INFO  [sqlalchemy.engine.base.Engine.0x...f34c] UPDATE
transactions SET serial=%(serial)s, office_id=%(office_id)s, sum=%(sum)
s, commit_date=%(commit_date)s, rollback_date=%(rollback_date)s,
rollback_time=%(rollback_time)s, rollback_user_id=%(rollback_user_id)s
WHERE transactions.commit_date = %(commit_date)s AND
transactions.serial = %(serial)s AND transactions.office_id = %
(office_id)s

17:31:50,761 INFO  [sqlalchemy.engine.base.Engine.0x...f34c]
{'rollback_date': '2009-10-22', 'sum': 111, 'commit_date':
'2009-10-22', 'office_id': 1, 'rollback_time': '11:12:15',
'rollback_user_id': 1, 'serial': 1}
sqlalchemy.engine.base.ResultProxy object at 0xac38c0c




[sqlalchemy] Re: not expected generated update query values

2009-10-22 Thread sector119

 though likely cleaner to pass the exact set of parameters desired.

How to pass those params if I use bindparam at where() and values(),
but I don't want to update columns that are in the where() clause, only at
values()?



[sqlalchemy] Re: not expected generated update query values

2009-10-22 Thread sector119

Something strange, Michael... All bindparams are different. The compiler
should not add all params to SET if values() has bindparam args, no?
where() have:
bindparam('commit_date'), bindparam('serial'), bindparam
('office_id')
values() have:
bindparam('rollback_date'), bindparam('rollback_time'), bindparam
('rollback_user_id')

All bindparams are different.

params={'commit_date':'2009-10-22', 'serial':1, 'office_id':1,
'rollback_date':'2009-10-22', 'rollback_time':'11:12:15',
'rollback_user_id':1, 'foobar':1, 'sum':111})

UPDATE transactions SET serial=%(serial)s, office_id=%(office_id)s,
sum=%(sum)
s, commit_date=%(commit_date)s, rollback_date=%(rollback_date)s,
rollback_time=%(rollback_time)s, rollback_user_id=%(rollback_user_id)
s
WHERE transactions.commit_date = %(commit_date)s AND
transactions.serial = %(serial)s AND transactions.office_id = %
(office_id)s

On Oct 22, 19:45, Michael Bayer mike...@zzzcomputing.com wrote:
 sector119 wrote:

  though likely cleaner to pass the exact set of parameters desired.

  How to pass those params if I use bindparam at where() and values(),
  but I don't want to update columns that are in the where() clause, only at
  values()?

 if you are using bindparam() objects, you'd given them all distinct names.
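
For anyone finding this later, a sketch of the two suggestions combined
(the w_* parameter names are ones I made up so they don't collide with the
column names):

from sqlalchemy import bindparam, and_

stmt = model.transactions_table.update().where(and_(
    model.transactions_table.c.commit_date == bindparam('w_commit_date'),
    model.transactions_table.c.serial == bindparam('w_serial'),
    model.transactions_table.c.office_id == bindparam('w_office_id'))
).values(
    rollback_date=bindparam('rollback_date'),
    rollback_time=bindparam('rollback_time'),
    rollback_user_id=bindparam('rollback_user_id'))

# pass only the parameters the statement actually needs; stray keys such
# as 'sum' or 'foobar' are what ended up in the SET clause before
meta.Session.execute(stmt, params={
    'w_commit_date': '2009-10-22', 'w_serial': 1, 'w_office_id': 1,
    'rollback_date': '2009-10-22', 'rollback_time': '11:12:15',
    'rollback_user_id': 1})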







[sqlalchemy] unbound method execute() must be called with Session instance as first argument (got Select instance instead)

2009-08-11 Thread sector119

Hi all!

Why do I get "unbound method execute() must be called with Session
instance as first argument (got Select instance instead)" with the
following code?

from twisted.internet import reactor, task, threads
from twisted.application import service
from twisted.python import log

from sqlalchemy import orm, create_engine
from sqlalchemy.sql import select

from eps.model import offices_table

url = 'postgres://test:t...@127.0.0.1:5432/eps'

def create_session():
    return orm.sessionmaker(bind=create_engine(url),
                            expire_on_commit=False)

def require_session(f):
    def wrapper(*args, **kwargs):
        s = create_session()
        try:
            return f(session=s, *args, **kwargs)
        except Exception, e:
            log.err(e)
            s.rollback()
            raise
        finally:
            s.close()
    return wrapper

@require_session
def _getTimers(session=None):
    return session.execute(
        select([offices_table.c.dayopen_time]).distinct()
            .order_by(offices_table.c.dayopen_time)).fetchall()

def getTimers():
    return threads.deferToThread(_getTimers)

log.msg('Timers: %r' % getTimers())

application = service.Application('Timmer')
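
A sketch of what I believe is the culprit: sessionmaker() returns a Session
class, which still has to be called to get an instance before execute() can
be used on it.

def create_session():
    # sessionmaker() builds a Session *class*; instantiate it here so the
    # decorated functions receive a real session object
    Session = orm.sessionmaker(bind=create_engine(url),
                               expire_on_commit=False)
    return Session()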



[sqlalchemy] How to set default join onclause if table has more than one FK to another table?

2009-03-11 Thread sector119

Hi All!

How do I set the default join condition if a table has more than one FK to
another table?

It may be Transaction.user_id==User.id or
Transaction.rollback_user_id==User.id; I want the first one to be the
default. Is that possible to do?

transactions_table = sa.Table('transactions', meta.metadata,
    sa.Column('id', sa.types.Integer, primary_key=True),
    sa.Column('serial', sa.types.Integer, index=True, nullable=False),
    sa.Column('person_id', sa.types.Integer,
              sa.ForeignKey('people.id'), index=True, nullable=False),
    sa.Column('user_id', sa.types.Integer,
              sa.ForeignKey(SYSTEM_SCHEMA + '.users.id'), index=True, nullable=False),
    sa.Column('service_id', sa.types.Integer,
              sa.ForeignKey('services.id'), index=True, nullable=False),
    sa.Column('sum', sa.types.Integer, nullable=False),
    sa.Column('commit_date', sa.types.Date, index=True, nullable=False),
    sa.Column('commit_time', sa.types.Time, index=True, nullable=False),
    sa.Column('rollback_date', sa.types.Date, index=True, nullable=True),
    sa.Column('rollback_time', sa.types.Time, index=True, nullable=True),
    sa.Column('rollback_user_id', sa.types.Integer,
              sa.ForeignKey(SYSTEM_SCHEMA + '.users.id'), index=True, nullable=True)
)

Thanks!
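
In case a concrete example helps, here is a sketch of what I mean (the
mapper, class and users_table names are made up; the point is spelling out
primaryjoin, or the join onclause, explicitly):

orm.mapper(Transaction, transactions_table, properties={
    # the "default" relation, via user_id
    'user': orm.relation(User,
        primaryjoin=transactions_table.c.user_id == users_table.c.id),
    # the second FK gets its own, explicitly named relation
    'rollback_user': orm.relation(User,
        primaryjoin=transactions_table.c.rollback_user_id == users_table.c.id),
})

# for a plain join, the ambiguity can also be resolved by giving the
# onclause explicitly:
j = transactions_table.join(
    users_table, transactions_table.c.user_id == users_table.c.id)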



[sqlalchemy] Re: miss for postgres_from in update() :(

2009-03-05 Thread sector119

I created a small (4-line) patch for databases/postgres.py:

def _append_from(self, text, stmt):
    return text.replace(' WHERE', ' FROM ' + string.join(
        [table.name for table in stmt.kwargs['postgres_from']], ', ') + ' WHERE')

def visit_update(self, update_stmt):
    text = super(PGCompiler, self).visit_update(update_stmt)
    if 'postgres_returning' in update_stmt.kwargs:
        text = self._append_returning(text, update_stmt)
    if 'postgres_from' in update_stmt.kwargs:
        text = self._append_from(text, update_stmt)
    return text

But I think that text.replace(...) is ugly; maybe you know how to insert
the FROM clause before WHERE without string replacement?
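
For what it's worth: later SQLAlchemy releases (around 0.7.4, if I remember
correctly; treat the exact version as an assumption) render UPDATE .. FROM
on PostgreSQL automatically whenever the WHERE clause references another
table, so no patch is needed there:

u = transactions_table.update() \
    .where(transactions_table.c.commit_date == current_date) \
    .where(transactions_table.c.serial == serial) \
    .where(transactions_table.c.user_id == users_table.c.id) \
    .where(users_table.c.office_id == id) \
    .values(rollback_date=current_date, rollback_time=current_time)
# compiles to: UPDATE transactions SET ... FROM users WHERE ...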




[sqlalchemy] miss for postgres_from in update() :(

2009-03-04 Thread sector119

Hi all!

u = transactions_table.update().\
    where(and_(transactions_table.c.commit_date==current_date,
               transactions_table.c.serial==serial,
               transactions_table.c.user_id==users_table.c.id,
               users_table.c.office_id==id)).\
    values(rollback_date=current_date, rollback_time=current_time)

When I perform this query I get (ProgrammingError) missing FROM-clause
entry for table users.

I use PostgreSQL and I can't find any postgres_from keyword argument in
update(). Is it possible to add it, or where do I have to look to try to
implement it, or how can I perform this query with SQLAlchemy?

Thanks a lot!



[sqlalchemy] sequence with predicate, locks and select for update

2009-02-24 Thread sector119

Hi All!

I use PostgreSQL and I have to emulate a partial sequence (a sequence with
a predicate).

I have a transactions table with a serial_id field; this field has to be
an autoincrement field, unique per commit_date and user_office_id.

When I perform an insert into this table I have to do something like this
in SQL:
BEGIN;

LOCK transactions IN ROW SHARE MODE;

SELECT serial_id FROM transactions WHERE  FOR UPDATE;

MAX_SERIAL_ID = SELECT MAX(serial_id) FROM transactions WHERE
commit_date=CURRENT_TIMESTAMP AND user_office_id=93;

INSERT INTO transactions VALUES (MAX_SERIAL_ID+1, 93, 'other data1'),
(MAX_SERIAL_ID+1, 93, 'other data2'), (MAX_SERIAL_ID+1, 93, 'other
data3');

COMMIT;

--OR--

last_serial = execute(select([func.max(transactions.c.serial)],
                      transactions.c.user_office_id==93,
                      transactions.c.commit_date==datetime.date.today())).fetchone()[0]
and then:
execute(transactions_table.insert(), [{93, last_serial+1, 'other', 'data'}, ...])
How do I do that so that no other inserts can be executed between my
select and insert statements from other requests, without blocking
SELECT statements on the transactions table!? Is there any magic in SA
to do that, or maybe I chose the wrong way to do this kind of thing?
Help me please, I'm stuck with that :(

Thanks a lot!
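
One idea I am toying with (purely a sketch; the separate serials table is
my own invention, not something SA provides): keep a per-office, per-date
counter row and lock just that row with FOR UPDATE instead of the whole
transactions table.

# assumed helper table: serials(office_id, commit_date, last_serial)
row = conn.execute(
    select([serials_table.c.last_serial],
           and_(serials_table.c.office_id == 93,
                serials_table.c.commit_date == datetime.date.today()),
           for_update=True)).fetchone()

next_serial = row[0] + 1

conn.execute(serials_table.update()
    .where(and_(serials_table.c.office_id == 93,
                serials_table.c.commit_date == datetime.date.today()))
    .values(last_serial=next_serial))

# plus the remaining transaction columns, omitted here
conn.execute(transactions_table.insert(),
             [{'serial_id': next_serial, 'user_office_id': 93}])

Concurrent writers for the same office and date block only on that one
counter row; plain SELECTs on transactions are not affected.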



[sqlalchemy] Re: sequence with predicate, locks and select for update

2009-02-24 Thread sector119

In SQL I hate that second SELECT; I can't perform SELECT MAX(serial_id)
FROM transactions WHERE bla, bla, bla FOR UPDATE; because I get
ERROR: SELECT FOR UPDATE/SHARE is not allowed with aggregate functions...




[sqlalchemy] Re: does tometadata have to change foreign key schema name if it already specified?

2009-01-21 Thread sector119

Thank you, Michael, it was very helpful!

Michael Bayer wrote: 
 this can be changed but the function still makes no sense.   What if
 you also said t3 = users.tometadata(metadata,
 schema='SOME_SCHEMA')  ?   then you would want the system.users.id
 FK to be changed.the tometadata() approach doesn't provide an API
 that can take the use case of variable schema names into account.

 Therefore, feel free to implement tometadata() yourself using
 column.copy() and constraint.copy():

 def copy_table(table, metadata, schema_map):
     args = []
     for c in table.columns:
         args.append(c.copy())
     for c in table.constraints:
         if isinstance(c, ForeignKeyConstraint):
             elem = list(c.elements)[0]
             schema = schema_map[elem.column.table.schema]
         else:
             schema = None
         args.append(c.copy(schema=schema))
     return Table(table.name, metadata,
                  schema=schema_map[table.schema], *args)

 usage:

 m2 = MetaData()
 t2 = copy_table(t, m2, {None:'SOME_SCHEMA', 'system':'system'})

 will map tables with no schema to SOME_SCHEMA, tables with system to
 the system schema.


 On Jan 20, 2009, at 3:02 PM, sector119 wrote:

 
  Hi ALL!
 
  Does tometadata have to change foreign key schema name if it already
  specified?
  For example if I have column 'user_id' with 'system.users.id' FK -
  tometadata change 'system' schema to SOME_SCHEMA. Is it ok, or
  tometadata have to set schema to SOME_SCHEMA for street_id,
  locality_id columns _only_ ?
 
  transactions_t = Table('transactions', meta.metadata,
Column('id', Integer,
  primary_key=True, autoincrement=False),
Column('user_id', Integer,
  ForeignKey('system.users.id'), nullable=False),
Column('street_id', Integer,
  ForeignKey('streets.id'), nullable=False),
Column('locality_id', Integer,
  ForeignKey('locality.id'), nullable=False),
Column('sum', Integer,
  nullable=False)
  )
 
  transactions_t.tometadata(metadata, schema='SOME_SCHEMA').create()



[sqlalchemy] does tometadata have to change foreign key schema name if it already specified?

2009-01-20 Thread sector119

Hi ALL!

Does tometadata have to change the foreign key schema name if it is
already specified?
For example, if I have a column 'user_id' with a 'system.users.id' FK,
tometadata changes the 'system' schema to SOME_SCHEMA. Is that ok, or
should tometadata set the schema to SOME_SCHEMA for the street_id and
locality_id columns _only_?

transactions_t = Table('transactions', meta.metadata,
    Column('id', Integer, primary_key=True, autoincrement=False),
    Column('user_id', Integer, ForeignKey('system.users.id'), nullable=False),
    Column('street_id', Integer, ForeignKey('streets.id'), nullable=False),
    Column('locality_id', Integer, ForeignKey('locality.id'), nullable=False),
    Column('sum', Integer, nullable=False)
)

transactions_t.tometadata(metadata, schema='SOME_SCHEMA').create()

Thanks a lot!






[sqlalchemy] conn.execute('CREATE SCHEMA %(s)s', {'s':s}) escape schema name,but it shouldn't, and raise ProgrammingError

2008-11-10 Thread sector119

Hi!

I use PostgreSQL, and when I try to create a schema I use the following
command: conn.execute('CREATE SCHEMA %(s)s', {'s': s})

I get raise exc.DBAPIError.instance(statement, parameters, e,
connection_invalidated=is_disconnect)
sqlalchemy.exc.ProgrammingError: (ProgrammingError) syntax error at or
near E'system'
LINE 1: CREATE SCHEMA E'system'

Why does it try to escape the schema name, and how can I make it not do that? :)

Thanks a lot!
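
In case it helps: bound parameters can only stand in for values, never for
identifiers, which is why the driver renders the schema name as the string
literal E'system'. A sketch of a work-around is below (quoting the name
through the dialect's identifier preparer; the CreateSchema construct at
the end exists only in newer releases, so treat that part as an assumption
for the version in this thread):

# quote the identifier ourselves instead of binding it as a value
name = engine.dialect.identifier_preparer.quote_identifier(s)
conn.execute('CREATE SCHEMA %s' % name)

# newer SQLAlchemy versions also provide a DDL element for this:
# from sqlalchemy.schema import CreateSchema
# conn.execute(CreateSchema(s))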



[sqlalchemy] how to change table foreign key target schema?

2008-11-07 Thread sector119

Hi!

I have people_table:

people_table = sa.Table('people', meta.metadata,
    sa.Column('id', sa.types.Integer, primary_key=True, autoincrement=False),
    sa.Column('street_id', sa.types.Integer,
              sa.ForeignKey('streets.id'), nullable=False),
    sa.Column('first_name', sa.types.Unicode(255), nullable=True),
    sa.Column('last_name', sa.types.Unicode(255), nullable=False)
)

And I have to create that table in a schema other than public.
If I do people_table.tometadata(meta.metadata, schema='myschema').create(),
SQLAlchemy creates the table, but with street_id = ForeignKey('streets.id'),
not ForeignKey('myschema.streets.id').
How do I make FKs follow the new table schema, or how do I change the
foreign key target on the fly?
people_table.foreign_keys[0].target_fullname = 'myschema.streets.id'
doesn't work; I get AttributeError: can't change attribute.

Thanks!
