Re: [sqlalchemy] Number of Open ConnectionPool connections

2016-03-19 Thread Mike Bayer


The most exact way to determine how many database connections a Python 
process is using is to use netstat.   If you need a quick number and 
don't care about internals, that's your number.


Next, if you're in the Python process and have a specific pool / engine to 
look at, it can give you an estimate using the checkedin() and 
checkedout() accessors.


You can get a more exact number within process by using event handlers 
on the Pool itself, including tracking the "connect" event 
(http://docs.sqlalchemy.org/en/rel_1_0/core/events.html#sqlalchemy.events.PoolEvents.connect). 
 However, SQLA doesn't yet provide an event for close(), so that 
approach requires a monkeypatch of the close() method. If you want to 
go the in-process route, we use a tool called connmon 
https://bitbucket.org/zzzeek/connmon which does this patch in order to 
get a graphical view of all connection activity in real time; here's a 
screenshot: 
https://bitbucket.org/zzzeek/connmon/src/954df26404d5533cf6a969b673e87b66ffe02a85/screenshot.png?fileviewer=file-view-default.


But quick and easy, use netstat.
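As a sketch of the in-process approach (illustrative only; connmon's actual implementation differs), the pool "connect" event plus the checkedin()/checkedout() accessors can be combined like this:

```python
from sqlalchemy import create_engine, event
from sqlalchemy.pool import QueuePool

# QueuePool chosen explicitly so the checkedin()/checkedout() accessors exist;
# sqlite is just a stand-in for whatever database is in use.
engine = create_engine("sqlite://", poolclass=QueuePool)

opened = []  # every raw DBAPI connection the pool has opened so far

@event.listens_for(engine, "connect")
def on_connect(dbapi_conn, connection_record):
    opened.append(dbapi_conn)

conn = engine.connect()          # checks a connection out of the pool
pool = engine.pool
in_use = pool.checkedout()       # connections currently checked out
idle = pool.checkedin()          # connections sitting idle in the pool
conn.close()                     # returns the connection to the pool
```

As noted above, there is no corresponding event for close(), so connections the pool itself discards are not subtracted from this count without a monkeypatch.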





On 03/16/2016 05:23 PM, Dave Ekhaus wrote:

Hi All

 Is there a way to determine the exact number of open db connections
in SQLAlchemy's connection pool?

Thanks
Dave

--
You received this message because you are subscribed to the Google
Groups "sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send
an email to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.




[sqlalchemy] Re: How to filter SQLAlchemy JSON column containing array of dicts by dict attribute

2016-03-19 Thread 'Boncheff' via sqlalchemy
Thanks for your replies - it is now working as expected 

On Tuesday, 15 March 2016 16:50:58 UTC, Boncheff wrote:
>
> We have a DB model that contains a 'foo' field which is JSONType
>
> Inside this we store the following data:
>
> foo =>[{"apps": ["test_app"]}, {"tags": ["test_tag1", "test_tag2"]}]
>
> My question is, how can I, using session.query, select all items that have 
> a *test_tag_1* as a tag inside the foo column?
>
> I tried* 
> session.query(MyModel).filter(MyModel.foo[0]['tags'].in_('test_tag1')).all()* 
> but this results in 
>
> *** NotImplementedError: Operator 'getitem' is not supported on this 
> expression
>
> Is what I am trying to achieve even possible?
>
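Since the final working query isn't shown above, here is a hedged sketch of one approach for PostgreSQL: with the column typed (or cast) as JSONB, the @> containment operator matches an array element that itself contains the sought tag. The model below is illustrative, not Boncheff's actual code:

```python
from sqlalchemy import Column, Integer
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class MyModel(Base):
    __tablename__ = "mymodel"
    id = Column(Integer, primary_key=True)
    foo = Column(JSONB)  # stores [{"apps": [...]}, {"tags": [...]}]

# jsonb containment: the array contains an element containing this structure
expr = MyModel.foo.contains([{"tags": ["test_tag1"]}])
sql = str(expr.compile(dialect=postgresql.dialect()))
# compiles to the @> containment operator
```

A query would then be session.query(MyModel).filter(expr).all().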





Re: [sqlalchemy] Re: Outer joins?

2016-03-19 Thread Alex Hall
That would be the simplest. Having something so inefficient just bugs me. :)

I'm using MSSQL, so limit() works. Would yield_per() help here, or is
that for something different? Even if it didn't help local memory, but
just kept the load on the DB server down, that would be good.
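yield_per() streams rows to the client in batches but still runs one big query on the server; to keep the per-query load on the DB server down, keyset pagination (ordering by the primary key and resuming after the last id seen, rather than using OFFSET) is one option. A runnable sketch with illustrative names:

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Item(Base):
    __tablename__ = "items"
    id = Column(Integer, primary_key=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(bind=engine)
session.add_all([Item() for _ in range(10)])
session.flush()

page_size = 4
last_id = 0
seen = []
while True:
    # each query scans only rows past the last id already processed,
    # so the server never has to skip over earlier rows as OFFSET would
    page = (session.query(Item)
            .filter(Item.id > last_id)
            .order_by(Item.id)
            .limit(page_size)
            .all())
    if not page:
        break
    seen.extend(item.id for item in page)
    last_id = page[-1].id
```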

On 3/16/16, Christopher Lee  wrote:
> It sounds like you should just fire it up with the outer joins and watch
> memory on the box.  If it gets too high, or crashes entirely, then you can
> look into different approaches.  For example, you could keep the outer
> joins, but paginate your query so that it is only pulling a subset of the
> rows from your main table (but fully joining against the secondary
> tables).  Just one caveat... if you are using MySQL, then LIMIT and OFFSET
> are not your friends; you'll want to find a different pagination mechanism.
>
> On Wed, Mar 16, 2016 at 10:29 AM, Jonathan Vanasco 
> wrote:
>
>> We all inherit less-than-ideal situations.
>>
>> If this is running once a day and isn't impacting performance or other
>> work, I wouldn't really worry about the huge join matrix.  It sounds like
>> the current solution is "good enough".  In a few weeks or months you'll
>> be
>> better acquainted with SqlAlchemy and Sql in general and can revisit.
>>
>> In terms of your 15-minute script: when you use subqueries, filters and
>> `load_only` for certain columns, your backend will generate a smaller
>> matrix and less data will be transferred "over the wire".
>>



Re: [sqlalchemy] OperationalError with SQLite with simple update query

2016-03-19 Thread Piotr Dobrogost
On Wednesday, March 16, 2016 at 4:41:38 PM UTC+1, Simon King wrote:
>
>
> Hmm, ok. In that case, does it work if you use "TextValue.id.in_(ids)" 
> rather than Node? I can't tell from your 
>

Yes, this works – see my last post in this thread.
 

> description if the "id" and "value" columns are both present on the 
> TextValue table, or if you actually need to join to the Node class.
>

Both TextValue and Node have an "id" column; the "value" column exists only 
in the TextValue table. To get a TextValue object, the tables for all 
superclasses of TextValue have to be joined, which is how joined table 
inheritance works.

Abbreviated models below:

class TextValue(Content):
    id = Column(Integer, ForeignKey('contents.id'), primary_key=True)
    value = Column(Unicode)


class Content(Node):
    @classproperty
    def __mapper_args__(cls):
        return dict(polymorphic_identity=camel_case_to_name(cls.__name__))

    id = Column(Integer, ForeignKey('nodes.id'), primary_key=True)
    (...)


class Node(Base, ContainerMixin, PersistentACLMixin):
    __table_args__ = (UniqueConstraint('parent_id', 'name'),)

    __mapper_args__ = dict(
        polymorphic_on='type',
        polymorphic_identity='node',
        with_polymorphic='*',
    )

    id = Column(Integer(), primary_key=True)
    type = Column(String(30), nullable=False)
    parent_id = Column(ForeignKey('nodes.id'), index=True)
    (...)

> It does seem like a bug that SA is generating this SQL.

I would have thought so, but I don't know SA well enough to be sure.


Regards,
Piotr 



Re: [sqlalchemy] alchemy expression confusion in insert statement having jsonb field

2016-03-19 Thread Krishnakant



On Wednesday 16 March 2016 02:24 PM, Simon King wrote:

On 16 Mar 2016, at 06:45, Krishnakant  wrote:

Dear all,
I have a challenge that is confusing me.
I have a table called voucher with the following fields:
(vid, vdate, Cr), where vid is integer, vdate is date and Cr is jsonb in
postgresql.
Can someone tell me how to write an SQL expression insert query
involving all 3 fields?
Something like con.execute(voucher.insert(), ...); I don't know how to do
this.
In the documentation there are 2 fields, id and data, and it seems id is
auto_increment, so inserting with only one field, which is only json, is
easy, but here I have 3 fields involved in the insert.
Happy hacking.
Krishnakant.

I’m not sure I understand the question. Here are the docs for INSERT 
expressions:

http://docs.sqlalchemy.org/en/rel_1_0/core/tutorial.html#executing-multiple-statements

you should be able to write something like this:

con.execute(voucher.insert(), vid=1, vdate=somedate, Cr=somedict)


Thanks Simon,
It works and I also got another problem worked out.

Happy hacking.
Krishnakant.
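Simon's suggestion can be fleshed out into a runnable sketch; sqlite and the generic JSON type stand in here for PostgreSQL and JSONB, and on modern SQLAlchemy the parameters are passed as dicts rather than keyword arguments:

```python
import datetime

from sqlalchemy import (JSON, Column, Date, Integer, MetaData, Table,
                        create_engine)

metadata = MetaData()
voucher = Table(
    "voucher", metadata,
    Column("vid", Integer, primary_key=True),
    Column("vdate", Date),
    Column("Cr", JSON),  # generic JSON here; JSONB on PostgreSQL
)

engine = create_engine("sqlite://")
metadata.create_all(engine)

with engine.begin() as con:
    # all three fields supplied per row; a list of dicts inserts many rows
    con.execute(voucher.insert(), [
        {"vid": 1, "vdate": datetime.date(2016, 3, 16), "Cr": {"1": 25}},
        {"vid": 2, "vdate": datetime.date(2016, 3, 17), "Cr": {"2": 30}},
    ])
    rows = con.execute(voucher.select()).fetchall()
```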



[sqlalchemy] returning a dynamic query from Column.default callable

2016-03-19 Thread Jonathan Beluch
Background: Using core we have tables defined in a few separate files.
Goal: To have column defaults be selectables which reference other tables 
while avoiding circular imports. To avoid circular imports I cannot always 
build the selects at import time; they have to be generated inside a 
function that takes table/column names, similar to how FKs work.

It seems that a callable for Column.default cannot return an uncompiled 
statement. Two solutions I see:

1) Use the context provided to default callables to compile the dynamic 
select statement.
2) Implement something similar to FKs, using all the parent attach events 
to set .arg to a selectable on a subclass of ColumnDefault.

Thoughts?
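One related option (a sketch, not a resolution of the thread): Column.default also accepts a SQL expression directly, such as a scalar subquery referencing another table, which is rendered inline at INSERT time. This doesn't solve the deferred-construction problem by itself, but it shows the mechanism; all names below are illustrative:

```python
from sqlalchemy import (Column, Integer, MetaData, Table, create_engine,
                        func, select)

metadata = MetaData()

counters = Table("counters", metadata, Column("value", Integer))

items = Table(
    "items", metadata,
    Column("id", Integer, primary_key=True),
    # the default is a scalar subquery, compiled into the INSERT itself
    Column("n", Integer,
           default=select(func.coalesce(func.max(counters.c.value), 0))
           .scalar_subquery()),
)

engine = create_engine("sqlite://")
metadata.create_all(engine)
with engine.begin() as con:
    con.execute(counters.insert(), {"value": 7})
    con.execute(items.insert(), {"id": 1})  # "n" falls back to the subquery
    n = con.execute(select(items.c.n)).scalar()
```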



[sqlalchemy] back_populates doesn't work properly on one-to-one relationships (uselist=False)

2016-03-19 Thread univerio
Consider these models:

class Foo(Base):
    __tablename__ = 'foo'

    id = Column(Integer, primary_key=True, autoincrement=True)
    bar_id = Column(Integer, ForeignKey("bar.id"), unique=True)
    bar = relationship(lambda: Bar, back_populates="foo")


class Bar(Base):
    __tablename__ = "bar"

    id = Column(Integer, primary_key=True, autoincrement=True)
    foo = relationship(Foo, back_populates="bar", uselist=False)


When loading foo.bar, I would expect foo.bar.foo to be set to foo without 
an additional query to the database, but this doesn't appear to be the 
case. Test code:


engine = create_engine("sqlite://", echo=True)
Base.metadata.create_all(bind=engine)
session = Session(bind=engine)

session.add(Foo(bar=Bar()))
session.flush()
session.expunge_all()

f = session.query(Foo).first()
b = f.bar
print("there shouldn't be anymore SQL queries")
assert b.foo is f


This results in the following output:

[...]
2016-03-17 18:04:46,858 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
2016-03-17 18:04:46,858 INFO sqlalchemy.engine.base.Engine INSERT INTO bar 
DEFAULT VALUES
2016-03-17 18:04:46,858 INFO sqlalchemy.engine.base.Engine ()
2016-03-17 18:04:46,859 INFO sqlalchemy.engine.base.Engine INSERT INTO foo 
(bar_id) VALUES (?)
2016-03-17 18:04:46,859 INFO sqlalchemy.engine.base.Engine (1,)
2016-03-17 18:04:46,860 INFO sqlalchemy.engine.base.Engine SELECT foo.id AS 
foo_id, foo.bar_id AS foo_bar_id 
FROM foo
 LIMIT ? OFFSET ?
2016-03-17 18:04:46,860 INFO sqlalchemy.engine.base.Engine (1, 0)
2016-03-17 18:04:46,861 INFO sqlalchemy.engine.base.Engine SELECT bar.id AS 
bar_id 
FROM bar 
WHERE bar.id = ?
2016-03-17 18:04:46,861 INFO sqlalchemy.engine.base.Engine (1,)
there shouldn't be anymore SQL queries
2016-03-17 18:04:46,861 INFO sqlalchemy.engine.base.Engine SELECT foo.id AS 
foo_id, foo.bar_id AS foo_bar_id 
FROM foo 
WHERE ? = foo.bar_id
2016-03-17 18:04:46,861 INFO sqlalchemy.engine.base.Engine (1,)
2016-03-17 18:04:46,862 INFO sqlalchemy.engine.base.Engine ROLLBACK


I understand that when uselist=True there might be other items in the list 
so you can't populate that list, but when uselist=False, especially when 
there's a unique constraint ensuring that there can only ever be one entry, 
you should be able to populate it. Am I missing something?


Jack



Re: [sqlalchemy] OperationalError with SQLite with simple update query

2016-03-19 Thread Piotr Dobrogost
On Wednesday, March 16, 2016 at 3:51:16 PM UTC+1, Simon King wrote:
>
> On Wed, Mar 16, 2016 at 1:43 PM, Piotr Dobrogost <
> p...@2016.groups.google.dobrogost.net > wrote:
>
>> Hi!
>>
>> When executing below code
>>
>> DBSession.query(TextValue).\
>> filter(Node.id.in_(ids)).\
>> update({TextValue.value: appstruct['text_value']},
>>  synchronize_session=False)
>>
>> I get this error:
>> OperationalError: (sqlite3.OperationalError) near "FROM": syntax error 
>> [SQL: u'UPDATE text_values SET value=? FROM nodes WHERE nodes.id IN (?, 
>> ?, ?, ?)'] [parameters: (u'zzz', u'1685', u'175', u'1688', u'180')]
>>
>>
>>  

> I'm not sure if sqlite supports multi-table updates. Do you know what sort 
> of SQL you are expecting to generate here?
>

I would expect a "normal" UPDATE with a WHERE clause. I'm not sure where 
the FROM comes from here, as the new value is given explicitly and is not 
to be read from existing rows.
 

> (Note that your query appears at least to be missing a join condition 
> between the TextValue and Node classes)
>

TextValue is a subclass of Content which is a subclass of Node.
Content declares this:
@classproperty
def __mapper_args__(cls):
return dict(polymorphic_identity=camel_case_to_name(cls.__name__))

so TextValue and Node should be implicitly joined according to rules for 
joined table polymorphism in SA.
 

Regards,
Piotr



[sqlalchemy] how can I search rows containing jsonb data on the basis of its key?

2016-03-19 Thread Krishnakant

Hello,
I wish to search rows in my table on the basis of the text of json keys.
My table has vouchercode, voucherdate, dramt, cramt.
Here dramt and cramt are both jsonb fields (postgresql 9.4).
dramt contains account and amount, same with cramt.
Sample data:
vouchercode:1 ... dramt{"1":25,"2":25}
"1" and "2" are account codes.
There will be several such rows and I want to get only those rows where 
either dramt or cramt contains accountcode 1.

Note that accountcode is a key, not the value, of the jsonb data,
so my pseudocode for the where clause is: where dramt.key = '1'.
How can I achieve this?
I can loop through all vouchers and do a comparison, but that is not 
efficient and entirely defeats the purpose of a select query.

Happy hacking.
Krishnakant.



Re: [sqlalchemy] how can I search rows containing jsonb data on the basis of its key?

2016-03-19 Thread Simon King
On Fri, Mar 18, 2016 at 5:12 AM, Krishnakant  wrote:

>
>
> On Thursday 17 March 2016 03:46 PM, Simon King wrote:
>
> On Thu, Mar 17, 2016 at 7:19 AM, Krishnakant 
> wrote:
>
>> Hello,
>> I wish to search rows in my table on the basis of text of json keys.
>> My table has vouchercode, voucherdate, dramt, cramt.
>> Here dramt and cramt are both jsonb fields (postgresql 9.4).
>> dramt contains account and amount, same with cramt.
>> sample data:
>> vouchercode:1 ... dramt{"1":25,"2":25}
>> "1" and "2" are account codes.
>> there will be several such rows and I want to get only those rows where
>> either dramt or cramt contains accountcode 1.
>> Note that accountcode is a key, not the value, of the jsonb data,
>> so my pseudocode for the where clause is: where dramt.key = '1'.
>> How can I achieve this?
>>
>>
> The first question would be "how do I write this in SQL?". I've only used
> JSONB very briefly so what I say might be wrong here, but this page
> describes the postgres JSONB operators:
>
> http://www.postgresql.org/docs/9.4/static/functions-json.html
>
> One way to write your query is to use the "@>" operator:
>
> select *
> from yourtable
> where dramt @> '{"accountcode": 1}'::jsonb
> or cramt @> '{"accountcode": 1}'::jsonb
>
> Another would be the ->> operator:
>
> select *
> from yourtable
> where dramt ->> 'accountcode' = '1'
> or cramt ->> 'accountcode' = '1'
>
> In SQLAlchemy, I think these would be expressed as:
>
> YourTable.dramt.contains({'accountcode': '1'})
>
> and
>
> YourTable.cramt['accountcode'].astext == '1'
>
> Hope that helps,
>
> Simon
>
> Thanks a million Simon this is wonderful.
> Now I have another issue related to this.
> Suppose I wish to sum up all the amounts for accountcode 1 if it appears
> in any Dramount dictionary of any voucher row.
> Will this be possible at all?
> dramount is in every row of voucher and I want to sum up for the total so
> that I see how much transactions have happened.
>

I think you should be able to use something like:

sa.func.sum(YourTable.dramt['amount'].cast(sa.Float))

ie.

result = (
    session.query(sa.func.sum(YourTable.dramt['amount'].cast(sa.Float)))
    .filter(YourTable.dramt['accountcode'].astext == '1')
).scalar()

Simon
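One caveat worth adding (an observation on the sample data, not from the thread): in the sample, dramt is {"1": 25, "2": 25}, i.e. the account code is a top-level key, so the jsonb key-existence operator ? — spelled .has_key() in SQLAlchemy — may be the closer fit. A compile-only sketch with illustrative model names:

```python
from sqlalchemy import Column, Integer, or_
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Voucher(Base):
    __tablename__ = "voucher"
    vouchercode = Column(Integer, primary_key=True)
    dramt = Column(JSONB)
    cramt = Column(JSONB)

# does "1" exist as a top-level key in either jsonb column?
expr = or_(Voucher.dramt.has_key("1"), Voucher.cramt.has_key("1"))
sql = str(expr.compile(dialect=postgresql.dialect()))
# renders with the ? key-existence operator on both columns
```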



[sqlalchemy] Re: Outer joins?

2016-03-19 Thread Jonathan Vanasco
We all inherit less-than-ideal situations.

If this is running once a day and isn't impacting performance or other 
work, I wouldn't really worry about the huge join matrix.  It sounds like 
the current solution is "good enough".  In a few weeks or months you'll be 
better acquainted with SqlAlchemy and Sql in general and can revisit.

In terms of your 15-minute script: when you use subqueries, filters and 
`load_only` for certain columns, your backend will generate a smaller 
matrix and less data will be transferred "over the wire".



Re: [sqlalchemy] Backref primary join condition stopped working in 0.9.4

2016-03-19 Thread Mike Bayer



On 03/16/2016 02:37 PM, Thorsten von Stein wrote:

For several years, I have been using a pattern for making a many-to-one
relationship from cls to remoteCls with a one-to-many backref with a
join condition cls.foreignKey == remoteCls.id, where
cls has a deletion flag _del which should exclude cls instances with
_del != 0 from the backref collection.

Since the condition involving _del is only relevant in the one-to-many
direction, I defined separate primaryjoin conditions which included this
condition only for the backref.

br = backref(
 backref,
 collection_class=list,
 primaryjoin=and_(remoteCls.id==remote(getattr(cls, foreignKey)),
cls._del==0))

rel = relationship(
 remoteCls,
 remote_side=remoteCls.id,
 primaryjoin=getattr(cls, foreignKey)==remoteCls.id,
 backref=br)

I have used this pattern successfully for years until I recently
upgraded SqlAlchemy to the latest version and found that the join
condition on the backref seems to be ignored and queries include
instances that are flagged as deleted via the _del column. I tested
several intermediate SqlAlchemy version and found that the first one
which breaks the pattern is 0.9.4.

Subsequently I found that removing the primary join condition on the
backref and including the _del != 0 condition in the forward primary
join condition seems to restore the intended behavior, but now many
queries involving large collections are slowed so dramatically as to make
this solution unworkable.

I reviewed the descriptions of changes, but they did not clarify for me
why the pattern above does not work any more. Is there a flaw in my code
that I am missing?


There are no changes that should affect the behavior of relationship in 
this way.  If anything, I'd wonder if the "0" value here is actually a 
boolean and is interacting with some backend-specific typing behavior, 
but there's not enough detail here to know.


Below is a complete test of your concept which succeeds.  Please alter 
this test appropriately to illustrate your failure condition occurring, 
thanks!


from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)


class B(Base):
    __tablename__ = 'b'
    id = Column(Integer, primary_key=True)
    a_id = Column(ForeignKey('a.id'))
    _del = Column("del", Integer, default=0)


def make_rel(cls, remoteCls, foreignKey, backref_name):
    br = backref(
        backref_name,
        collection_class=list,
        primaryjoin=and_(
            remoteCls.id == remote(getattr(cls, foreignKey)),
            cls._del == 0)
    )

    rel = relationship(
        remoteCls,
        remote_side=remoteCls.id,
        primaryjoin=getattr(cls, foreignKey) == remoteCls.id,
        backref=br)
    return rel

B.a = make_rel(B, A, "a_id", "bs")

e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)

s = Session(e)

b1, b2, b3 = B(), B(), B()
a1 = A(bs=[b1, b2, b3])

s.add(a1)
s.commit()

b2._del = 1
s.commit()

assert a1.bs == [b1, b3]












Re: [sqlalchemy] OperationalError with SQLite with simple update query

2016-03-19 Thread Mike Bayer



On 03/16/2016 11:23 AM, Piotr Dobrogost wrote:


so TextValue and Node should be implicitly joined according to rules for
joined table polymorphism in SA.


this is not supported - please read the caveats at


http://docs.sqlalchemy.org/en/rel_1_0/orm/query.html?highlight=query.update#sqlalchemy.orm.query.Query.update
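Following Simon's earlier suggestion in this thread (filtering on TextValue.id instead of Node.id), the caveat can be avoided by keeping the criteria on the subclass table only. A minimal runnable sketch with simplified stand-in models, not Piotr's actual schema:

```python
from sqlalchemy import (Column, ForeignKey, Integer, String, Unicode,
                        create_engine)
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Node(Base):
    __tablename__ = "nodes"
    id = Column(Integer, primary_key=True)
    type = Column(String(30))
    __mapper_args__ = {"polymorphic_on": type,
                       "polymorphic_identity": "node"}

class TextValue(Node):
    __tablename__ = "text_values"
    id = Column(Integer, ForeignKey("nodes.id"), primary_key=True)
    value = Column(Unicode)
    __mapper_args__ = {"polymorphic_identity": "text_value"}

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(bind=engine)
session.add_all([TextValue(value="a"), TextValue(value="b")])
session.flush()
ids = [tv.id for tv in session.query(TextValue)]

# criteria reference text_values.id only, so no multi-table UPDATE is needed
(session.query(TextValue)
 .filter(TextValue.id.in_(ids))
 .update({TextValue.value: "zzz"}, synchronize_session=False))
session.expire_all()
values = [tv.value for tv in session.query(TextValue)]
```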




Re: [sqlalchemy] back_populates doesn't work properly on one-to-one relationships (uselist=False)

2016-03-19 Thread Mike Bayer



On 03/17/2016 09:15 PM, univerio wrote:

Consider these models:

class Foo(Base):
 __tablename__ = 'foo'

 id = Column(Integer, primary_key=True, autoincrement=True)
 bar_id = Column(Integer, ForeignKey("bar.id"), unique=True)
 bar = relationship(lambda: Bar, back_populates="foo")


class Bar(Base):
 __tablename__ = "bar"

 id = Column(Integer, primary_key=True, autoincrement=True)
 foo = relationship(Foo, back_populates="bar", uselist=False)


When loading foo.bar, I would expect foo.bar.foo to be set to foo
without an additional query to the database, but this doesn't appear to
be the case. Test code:


engine = create_engine("sqlite://", echo=True)
Base.metadata.create_all(bind=engine)
session = Session(bind=engine)

session.add(Foo(bar=Bar()))
session.flush()
session.expunge_all()

f = session.query(Foo).first()
b = f.bar
print("there shouldn't be anymore SQL queries")
assert b.foo is f


backrefs only communicate with each other when you manually set or 
remove items from an attribute.  They don't coordinate during a load 
operation.   To suit the common use case of a collection load where the 
other many-to-one side should not emit SQL, this is easy because the 
many-to-one side knows it can do a simple lookup in the identity map. 
With the one-to-many/one side, this is not possible because we don't 
know the primary key of the single element that would be here.
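For the case in the test above, the reverse attribute can be populated by hand without SQL using the public set_committed_value() helper (a sketch of a workaround, not a thread-confirmed recommendation):

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import (Session, attributes, declarative_base,
                            relationship)

Base = declarative_base()

class Foo(Base):
    __tablename__ = "foo"
    id = Column(Integer, primary_key=True)
    bar_id = Column(Integer, ForeignKey("bar.id"), unique=True)
    bar = relationship(lambda: Bar, back_populates="foo")

class Bar(Base):
    __tablename__ = "bar"
    id = Column(Integer, primary_key=True)
    foo = relationship(Foo, back_populates="bar", uselist=False)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(bind=engine)
session.add(Foo(bar=Bar()))
session.flush()
session.expunge_all()

f = session.query(Foo).first()
b = f.bar
# mark b.foo as already loaded so accessing it emits no lazy-load SELECT
attributes.set_committed_value(b, "foo", f)
```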








This results in the following output:

[...]
2016-03-17 18:04:46,858 INFO sqlalchemy.engine.base.Engine BEGIN
(implicit)
2016-03-17 18:04:46,858 INFO sqlalchemy.engine.base.Engine INSERT
INTO bar DEFAULT VALUES
2016-03-17 18:04:46,858 INFO sqlalchemy.engine.base.Engine ()
2016-03-17 18:04:46,859 INFO sqlalchemy.engine.base.Engine INSERT
INTO foo (bar_id) VALUES (?)
2016-03-17 18:04:46,859 INFO sqlalchemy.engine.base.Engine (1,)
2016-03-17 18:04:46,860 INFO sqlalchemy.engine.base.Engine SELECT
foo.id AS foo_id, foo.bar_id AS foo_bar_id
FROM foo
  LIMIT ? OFFSET ?
2016-03-17 18:04:46,860 INFO sqlalchemy.engine.base.Engine (1, 0)
2016-03-17 18:04:46,861 INFO sqlalchemy.engine.base.Engine SELECT
bar.id AS bar_id
FROM bar
WHERE bar.id = ?
2016-03-17 18:04:46,861 INFO sqlalchemy.engine.base.Engine (1,)
there shouldn't be anymore SQL queries
2016-03-17 18:04:46,861 INFO sqlalchemy.engine.base.Engine SELECT
foo.id AS foo_id, foo.bar_id AS foo_bar_id
FROM foo
WHERE ? = foo.bar_id
2016-03-17 18:04:46,861 INFO sqlalchemy.engine.base.Engine (1,)
2016-03-17 18:04:46,862 INFO sqlalchemy.engine.base.Engine ROLLBACK


I understand that when uselist=True there might be other items in the
list so you can't populate that list, but when uselist=False, especially
when there's a unique constraint ensuring that there can only ever be
one entry, you should be able to populate it. Am I missing something?


Jack





Re: [sqlalchemy] Backref primary join condition stopped working in 0.9.4

2016-03-19 Thread Thorsten von Stein
Mike,

Thank you very much for your quick reply and for creating the test case.
Upon closer inspection, I noticed that the errors in my system resulted
from a special case of a self-referential relationship as in the following
modification of your test case, in which additionally the post_update flag
is required. In that case, the assertion fails. However, with an additional
remote annotation to make the _del==0 condition unambiguous, the example
works again. Now the only question that remains is why the original version
was working fine up to 0.9.3.

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()








*class C(Base): __tablename__ = 'c' id = Column(Integer,
primary_key=True) parent_id = Column(ForeignKey('c.id
')) _del = Column("del", Integer, default=0)*

def make_rel(cls, remoteCls, foreignKey, backref_name):
 br = backref(
 backref_name,
 collection_class=list,
 primaryjoin=and_(
 remoteCls.id == remote(getattr(cls, foreignKey)),
 cls._del == 0* # works with: remote(cls._del) == 0*
 )
 )

 rel = relationship(
 remoteCls,
 remote_side=remoteCls.id,
 primaryjoin=getattr(cls, foreignKey) == remoteCls.id,
 *post_update = True*,
 backref=br)
 return rel

*C.parent = make_rel(C, C, "parent_id", "children")*

e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)

s = Session(e)



c1, c2, c3 = C(), C(), C()
c0 = C(children=[c1, c2, c3])

s.add(c0)
s.commit()


c2._del = 1
s.commit()


assert c0.children == [c1, c3]



On Wed, Mar 16, 2016 at 12:42 PM, Mike Bayer wrote:

>
>
> On 03/16/2016 02:37 PM, Thorsten von Stein wrote:
>
>> For several years, I have been using a pattern for making a many-to-one
>> relationship from *cls* to *remoteCls* with a one-to-many backref with a
>> join condition cls.foreignKey == remoteCls.id, where
>> *cls* has a deletion flag _del which should exclude *cls* instances with
>>
>> _del != 0 from the backref collection.
>>
>> Since the condition involving _del is only relevant in the one-to-many
>> direction, I defined separate primaryjoin conditions which included this
>> condition only for the backref.
>>
>> br = backref(
>>  backref,
>>  collection_class=list,
>>  primaryjoin=and_(remoteCls.id==remote(getattr(cls, foreignKey)),
>> cls._del==0))
>>
>> rel = relationship(
>>  remoteCls,
>>  remote_side=remoteCls.id,
>>  primaryjoin=getattr(cls, foreignKey)==remoteCls.id,
>>  backref=br)
>>
>> I have used this pattern successfully for years until I recently
>> upgraded SqlAlchemy to the latest version and found that the join
>> condition on the backref seems to be ignored and queries include
>> instances that are flagged as deleted via the _del column. I tested
>> several intermediate SQLAlchemy versions and found that the first one
>> which breaks the pattern is 0.9.4.
>>
>> Subsequently I found that removing the primary join condition on the
>> backref and including the _del != 0 condition in the forward primary
>> join condition seems to restore the intended behavior, but now many
>> queries involving large collections are dramatically slowed, making this
>> solution unworkable.
>>
>> I reviewed the descriptions of changes, but they did not clarify for me
>> why the pattern above does not work any more. Is there a flaw in my code
>> that I am missing?
>>
>
> There are no changes that should affect the behavior of relationship in
> this way.  If anything, I'd wonder if the "0" value here is actually a
> boolean and is interacting with some backend-specific typing behavior, but
> there's not enough detail here to know.
>
> Below is a complete test of your concept which succeeds.  Please alter
> this test appropriately to illustrate your failure condition occurring,
> thanks!
>
> from sqlalchemy import *
> from sqlalchemy.orm import *
> from sqlalchemy.ext.declarative import declarative_base
>
> Base = declarative_base()
>
>
> class A(Base):
> __tablename__ = 'a'
> id = Column(Integer, primary_key=True)
>
>
> class B(Base):
> __tablename__ = 'b'
> id = Column(Integer, primary_key=True)
> a_id = Column(ForeignKey('a.id'))
> _del = Column("del", Integer, default=0)
>
>
> def make_rel(cls, remoteCls, foreignKey, backref_name):
> br = backref(
> backref_name,
> collection_class=list,
> primaryjoin=and_(
> remoteCls.id == remote(getattr(cls, foreignKey)),
> cls._del == 0)
> )
>
> rel = relationship(
> remoteCls,
> remote_side=remoteCls.id,
> primaryjoin=getattr(cls, foreignKey) == remoteCls.id,
> backref=br)
> return rel
>
> B.a = make_rel(B, A, "a_id", "bs")
>
> e = create_engine("sqlite://", echo=True)
> Base.metadata.create_all(e)
>
> s = Session(e)
>
> b1, b2, b3 

Re: [sqlalchemy] Cannot generate sql for shared mode lock (`LOCK IN SHARE MODE`)

2016-03-19 Thread Mike Bayer

dialect specific:

from sqlalchemy.dialects import mysql
print SomeClass.__table__.select(SomeClass.id == 1).\
    with_for_update(read=True).compile(dialect=mysql.dialect())




SELECT some_table.id, some_table.name
FROM some_table
WHERE some_table.id = %s LOCK IN SHARE MODE



http://docs.sqlalchemy.org/en/rel_1_0/faq/sqlexpressions.html#how-do-i-render-sql-expressions-as-strings-possibly-with-bound-parameters-inlined
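On more recent SQLAlchemy (1.4+ — an assumption beyond the 1.0-era code above), the same idea can be sketched with the lightweight table()/column() constructs, and the FAQ's literal_binds flag inlines the bound parameter as well. Table and column names follow the thread:

```python
from sqlalchemy import table, column, select
from sqlalchemy.dialects import mysql

# Lightweight stand-in for SomeClass.__table__ (names taken from the thread)
some_table = table("some_table", column("id"), column("name"))

stmt = select(some_table).where(some_table.c.id == 1).with_for_update(read=True)
compiled = stmt.compile(
    dialect=mysql.dialect(),
    compile_kwargs={"literal_binds": True},  # inline the bound parameter
)
print(compiled)
# SELECT some_table.id, some_table.name
# FROM some_table
# WHERE some_table.id = 1 LOCK IN SHARE MODE
```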


On 03/17/2016 09:27 AM, Stu Schwartz wrote:

Hi,

I'm not sure if I'm approaching this incorrectly or missed some
documentation but I cannot come up with any combination of SQLAlchemy
methods to generate LOCK IN SHARE MODE.

Here's a simple example:
|
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class SomeClass(Base):
 __tablename__ = 'some_table'
 id = Column(Integer, primary_key=True)
 name =  Column(String(50))
|

Works as expected:
|
 >>> print SomeClass.__table__.select(SomeClass.id == 1).with_for_update()
SELECT some_table.id, some_table.name
FROM some_table
WHERE some_table.id = :id_1 FOR UPDATE
|


Does not work as expected:
|
 >>> print SomeClass.__table__.select(SomeClass.id ==
1).with_for_update(read=True)
SELECT some_table.id, some_table.name
FROM some_table
WHERE some_table.id = :id_1 FOR UPDATE
|
I was expecting:
|
SELECT some_table.id, some_table.name
FROM some_table
WHERE some_table.id = :id_1 LOCK IN SHARE MODE
|


Am I missing something here?

Thanks in advance,

Stu



Re: [sqlalchemy] how can I search rows containing jsonb data on the basis of its key

2016-03-19 Thread Simon King
On Thu, Mar 17, 2016 at 7:19 AM, Krishnakant  wrote:

> Hello,
> I wish to search rows in my table on the basis of text of json keys.
> My table has vouchercode, voucherdate, dramt, cramt.
> Here dramt and cramt are both jsonb fields (postgresql 9.4).
> dramt containes account and amount, same with cramt.
> sample data.
> vouchercode:1 ... dramt{"1":25,"2":25}
> "1" and "2" are account codes.
> there will be several such rows and I want to get only those rows where
> either dramt or cramt contains accountcode as 1.
> Note that accountcode is a key not the value of jsonb data.
> so my pseudocode for the where clause is: where dramt.key = '1'.
> How can I achieve this?
>
>
The first question would be "how do I write this in SQL?". I've only used
JSONB very briefly so what I say might be wrong here, but this page
describes the postgres JSONB operators:

http://www.postgresql.org/docs/9.4/static/functions-json.html

One way to write your query is to use the "@>" operator:

select *
from yourtable
where dramt @> '{"accountcode": 1}'::jsonb
or cramt @> '{"accountcode": 1}'::jsonb

Another would be the ->> operator:

select *
from yourtable
where dramt ->> 'accountcode' = '1'
or cramt ->> 'accountcode' = '1'

In SQLAlchemy, I think these would be expressed as:

YourTable.dramt.contains({'accountcode': '1'})

and

YourTable.cramt['accountcode'].astext == '1'

Hope that helps,

Simon



Re: [sqlalchemy] returning a dynamic query from Column.default callable

2016-03-19 Thread Mike Bayer



On 03/17/2016 04:47 PM, Jonathan Beluch wrote:

Background: Using core we have tables defined in a few separate files.
Goal: To have column defaults be selectables which reference other
tables while avoiding circular imports. To avoid circular imports I
cannot always build the selects at import time, they have to be
generated inside a function that takes table/col names similar to how
FKs work.

It seems that a callable for Column.default cannot return an uncompiled
statement. Two solutions I see:

1) Use the context provided to default callables to compile the dynamic
select statement.
2) Implement something similar to FKs, using all the parent attach
events to set .arg to a selectable on a subclass of ColumnDefault.

Thoughts?


A column default callable is expected to produce the value that's to be 
embedded into the INSERT values.   You can execute any SQL you'd like 
there, but that's after the INSERT statement's string form is already 
decided.


If the goal is that the default is a SQL clause to be embedded in the 
string form of the INSERT, then you use a fixed default that is 
represented by that SQL clause.


If you can't generate that SQL clause due to imports, there's various 
ways to defer the production of the Column but the simplest way, not 
necessarily the nicest, is to stick it in declare_first:


from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class B(Base):
__tablename__ = 'b'
id = Column(Integer, primary_key=True)

@classmethod
def __declare_first__(cls):
cls.b = Column(Integer, default=select([A]).as_scalar())


class A(Base):
__tablename__ = 'a'
id = Column(Integer, primary_key=True)

e = create_engine("sqlite://", echo=True)
configure_mappers()
Base.metadata.create_all(e)

s = Session(e)
s.add(B())
s.commit()


__declare_first__ is actually just a hook for the before_configured 
event so that's actually using the events in any case.









Re: [sqlalchemy] returning a dynamic query from Column.default callable

2016-03-19 Thread Mike Bayer
Just with Core, you can call some_table.append_column(Column(... whatever ...)) at 
any time.  The only "events" I can think of are the DDL attachment events:


from sqlalchemy import *
from sqlalchemy import event

m = MetaData()


table_a = Table(
'a', m,
Column('id', Integer)
)


@event.listens_for(Table, "after_parent_attach")
def evt(target, parent):
if parent is m and target.name == 'b':
m.tables['a'].append_column(
Column('data', Integer, default=select([target]).as_scalar())
)

table_b = Table(
'b', m,
Column('id', Integer)
)

e = create_engine("sqlite://", echo=True)
m.create_all(e)

e.execute(table_a.insert())


I guess if you really want to get the column default to be something 
that becomes the SQL at the last minute, you could do something like this:


from sqlalchemy import *
from sqlalchemy.sql import ColumnElement
from sqlalchemy.ext.compiler import compiles


class MyThing(ColumnElement):
pass


@compiles(MyThing)
def _generate_my_thing(element, compiler, **kw):
return compiler.process(select([table_b]).as_scalar())

m = MetaData()


table_a = Table(
'a', m,
Column('id', Integer),
Column('data', Integer, default=MyThing())
)

table_b = Table(
'b', m,
Column('id', Integer)
)

e = create_engine("sqlite://", echo=True)
m.create_all(e)

e.execute(table_a.insert())





On 03/17/2016 05:25 PM, Jonathan Beluch wrote:

We're just using core, is there some equivalent?

On Thursday, March 17, 2016 at 3:19:23 PM UTC-6, Mike Bayer wrote:



On 03/17/2016 04:47 PM, Jonathan Beluch wrote:
 > Background: Using core we have tables defined in a few separate
files.
 > Goal: To have column defaults be selectables which reference other
 > tables while avoiding circular imports. To avoid circular imports I
 > cannot always build the selects at import time, they have to be
 > generated inside a function that takes table/col names similar to
how
 > FKs work.
 >
 > It seems that a callable for Column.default cannot return an
uncompiled
 > statement. Two solutions I see:
 >
 > 1) Use the context provided to default callables to compile the
dynamic
 > select statement.
 > 2) Implement something similar to FKs, using all the parent attach
 > events to set .arg to a selectable on a subclass of ColumnDefault.
 >
 > Thoughts?

A column default callable is expected to produce the value that's to be
embedded into the INSERT values.   You can execute any SQL you'd like
there, but that's after the INSERT statement's string form is already
decided.

If the goal is that the default is a SQL clause to be embedded in the
string form of the INSERT, then you use a fixed default that is
represented by that SQL clause.

If you can't generate that SQL clause due to imports, there's various
ways to defer the production of the Column but the simplest way, not
necessarily the nicest, is to stick it in declare_first:

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class B(Base):
  __tablename__ = 'b'
  id = Column(Integer, primary_key=True)

  @classmethod
  def __declare_first__(cls):
  cls.b = Column(Integer, default=select([A]).as_scalar())


class A(Base):
  __tablename__ = 'a'
  id = Column(Integer, primary_key=True)

e = create_engine("sqlite://", echo=True)
configure_mappers()
Base.metadata.create_all(e)

s = Session(e)
s.add(B())
s.commit()


__declare_first__ is actually just a hook for the before_configured
event so that's actually using the events in any case.




 >

Re: [sqlalchemy] Modeling single FK to multiple tables

2016-03-19 Thread Simon King
> On 17 Mar 2016, at 19:11, Alex Hall  wrote:
> 
> Hello all,
> It seems like I can't go a day without running into some kind of wall.
> This one is a conceptual one regarding foreign keys. I have to somehow
> get the same FK column in table A pointing to IDs in tables B and C.
> 
> At one person's suggestion, I'm making classes for my tables, even
> though I'm using automap. This is to let me stop doing a ton of joins,
> making querying much easier... I hope! I'm defining all the foreign
> keys between my tables manually. For instance:
> 
> class item(base):
> __tablename__ = "item"
> itm_id = Column(Integer, primary_key=True)
> vendornum = Column(String, ForeignKey(VENDR.PVVNNO))
> 
> class vendorTable(base):
> __tablename__ = "VENDR"
> PVVNNO = Column(String, primary_key=True)
> 
> If I've understood correctly, I'll now be able to say
> item.vendornum.vendor_full_name
> to get the vendor's full name for any item.
> 
> Here's the problem. Items have attachments, and attached text,
> respectively held in attach and attach_text tables. Binding them to
> items is a table called assignment. Assignment is pretty
> straightforward, with an itm_id and an attachment id (att_id). The
> trouble is that this att_id occurs in both attach and attach_text. I
> can make att_id a foreign key to one table or the other, but I'm not
> sure how to make it go to both tables.
> 
> class assignmentTable(base):
> __tablename__ = "assignment"
> itm_id = Column(Integer, ForeignKey(item.itm_id))
> #the following column has to point to attach_text.att_id AS WELL
>  att_id = Column(Integer, ForeignKey(attachment.att_id))
> seq_num = Column(Integer)
> asn_primary = Column(Integer, nullable=True)
> 
> class attachmentTable(base):
> __tablename__ = "attachment"
> att_id = Column(Integer, primary_key=True)
> 
> class attachmentTextTable(base):
> __tablename__ = "attach_text"
> att_id = Column(Integer, primary_key=True)

This isn’t possible - a foreign key can only point at one other column. If rows 
in attachment and attach_text are always supposed to be in a 1-to-1 
relationship, you could consider one of them to be the “master” record, and 
make any other occurrences point to that. So for example, attachment.att_id 
could be the master id, and attach_text.att_id could be a foreign key pointing 
at attachment.att_id (perhaps with a unique constraint on it). The assignment 
table suggests that this is a many-to-many relationship (an attachment can 
belong to many items, and an item can have many attachments), so the 
relationships might be set up like this:

item.assignments (list of assignmentTable objects)
assignment.item (item object)
assignment.attachment (attachmentTable object)
attachmentTable.text (attachmentTextTable object)

Here’s a working example:

import sqlalchemy as sa
import sqlalchemy.orm as saorm
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Item(Base):
__tablename__ = 'item'
itm_id = sa.Column(sa.Integer, primary_key=True)
vendornum = sa.Column(sa.ForeignKey('vendr.pvvnno'))

vendor = saorm.relationship('Vendor')
assignments = saorm.relationship('Assignment', back_populates='item')

class Vendor(Base):
__tablename__ = 'vendr'
pvvnno = sa.Column(sa.String(16), primary_key=True)
vendor_full_name = sa.Column(sa.Text())

class Assignment(Base):
__tablename__ = 'assignment'
asn_primary = sa.Column(sa.Integer, primary_key=True)
itm_id = sa.Column(sa.ForeignKey(Item.itm_id), nullable=False)
att_id = sa.Column(sa.ForeignKey('attachment.att_id'), nullable=False)
seq_num = sa.Column(sa.Integer)

item = saorm.relationship('Item', back_populates='assignments')
attachment = saorm.relationship('Attachment', back_populates='assignment')

class Attachment(Base):
__tablename__ = 'attachment'
att_id = sa.Column(sa.Integer, primary_key=True)
att_data = sa.Column(sa.Text())

assignment = saorm.relationship('Assignment', back_populates='attachment')
text = saorm.relationship('AttachmentText',
  back_populates='attachment',
  uselist=False)

class AttachmentText(Base):
__tablename__ = 'attachmenttext'
att_id = sa.Column(sa.ForeignKey('attachment.att_id'), primary_key=True)
att_text = sa.Column(sa.Text())

attachment = saorm.relationship('Attachment', back_populates='text')


if __name__ == '__main__':
import sys
engine = sa.create_engine('sqlite://', echo='debug')
Base.metadata.create_all(bind=engine)
Session = saorm.sessionmaker(bind=engine)
session = Session()

item = Item(vendor=Vendor(pvvnno='Alex',
  vendor_full_name='Alex the Vendor'))
for i in range(5):
attachment = Attachment(att_data=str(i),
text=AttachmentText(att_text='text for %s' % i))
        item.assignments.append(Assignment(attachment=attachment))

    session.add(item)
    session.commit()

[sqlalchemy] OperationalError with SQLite with simple update query

2016-03-19 Thread Piotr Dobrogost
Hi!

When executing below code

DBSession.query(TextValue).\
filter(Node.id.in_(ids)).\
update({TextValue.value: appstruct['text_value']},
 synchronize_session=False)

I get this error:
OperationalError: (sqlite3.OperationalError) near "FROM": syntax error 
[SQL: u'UPDATE text_values SET value=? FROM nodes WHERE nodes.id IN (?, ?, 
?, ?)'] [parameters: (u'zzz', u'1685', u'175', u'1688', u'180')]

Does SA construct query which is not valid in SQLite?
How can I solve this problem?

Regards,
Piotr Dobrogost



[sqlalchemy] Re: how can I search rows containing jsonb data on the basis of its key

2016-03-19 Thread Jonathan Beluch

>
> so my pseudocode for the where clause is: where dramt.key = '1'. 
>

.where(or_(table.c.dramt.has_key('1'), table.c.cramt.has_key('1')))

This only works for JSONB.
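For reference, both forms can be compiled against the postgresql dialect without a database, which makes it easy to see which operator each one renders. A sketch — the table and column names follow the thread; the key and value are illustrative:

```python
from sqlalchemy import Column, Integer, MetaData, Table, or_
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import JSONB

m = MetaData()
t = Table(
    "yourtable", m,
    Column("id", Integer, primary_key=True),
    Column("dramt", JSONB),
    Column("cramt", JSONB),
)

# has_key() renders the postgres `?` operator; contains() renders `@>`
key_expr = or_(t.c.dramt.has_key("1"), t.c.cramt.has_key("1"))
contains_expr = t.c.dramt.contains({"1": 25})

print(key_expr.compile(dialect=postgresql.dialect()))
print(contains_expr.compile(dialect=postgresql.dialect()))
```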



Re: [sqlalchemy] OperationalError with SQLite with simple update query

2016-03-19 Thread Simon King
On Wed, Mar 16, 2016 at 3:23 PM, Piotr Dobrogost <
p...@2016.groups.google.dobrogost.net> wrote:

> On Wednesday, March 16, 2016 at 3:51:16 PM UTC+1, Simon King wrote:
>>
>> On Wed, Mar 16, 2016 at 1:43 PM, Piotr Dobrogost <
>> p...@2016.groups.google.dobrogost.net> wrote:
>>
>>> Hi!
>>>
>>> When executing below code
>>>
>>> DBSession.query(TextValue).\
>>> filter(Node.id.in_(ids)).\
>>> update({TextValue.value: appstruct['text_value']},
>>>  synchronize_session=False)
>>>
>>> I get this error:
>>> OperationalError: (sqlite3.OperationalError) near "FROM": syntax error
>>> [SQL: u'UPDATE text_values SET value=? FROM nodes WHERE nodes.id IN (?,
>>> ?, ?, ?)'] [parameters: (u'zzz', u'1685', u'175', u'1688', u'180')]
>>>
>>>
>>>
>
>> I'm not sure if sqlite supports multi-table updates. Do you know what
>> sort of SQL you are expecting to generate here?
>>
>
> I would expect a "normal" UPDATE with a WHERE clause. I'm not sure where the
> FROM comes from here, as the new value is given explicitly and not read
> from existing rows.
>
>
>> (Note that your query appears at least to be missing a join condition
>> between the TextValue and Node classes)
>>
>
> TextValue is a subclass of Content which is a subclass of Node.
> Content declares this:
> @classproperty
> def __mapper_args__(cls):
> return dict(polymorphic_identity=camel_case_to_name(cls.__name__))
>
> so TextValue and Node should be implicitly joined according to rules for
> joined table polymorphism in SA.
>

Hmm, ok. In that case, does it work if you use "TextValue.id.in_(ids)"
rather than Node? I can't tell from your description if the "id" and
"value" columns are both present on the TextValue table, or if you actually
need to join to the Node class.

It does seem like a bug that SA is generating this SQL.

Simon
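At the SQL level, the multi-table form can be avoided with a subquery in the WHERE clause, which SQLite accepts (SQLite before 3.33 rejects UPDATE ... FROM entirely). A minimal stdlib sqlite3 sketch, with table and column names simplified from the traceback:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE nodes (id INTEGER PRIMARY KEY);
    CREATE TABLE text_values (id INTEGER PRIMARY KEY REFERENCES nodes(id),
                              value TEXT);
    INSERT INTO nodes (id) VALUES (1), (2), (3);
    INSERT INTO text_values (id, value) VALUES (1, 'a'), (2, 'b'), (3, 'c');
""")

# Instead of "UPDATE text_values SET value=? FROM nodes WHERE nodes.id IN (...)",
# restrict the rows with a subquery, which every SQLite version understands:
conn.execute(
    "UPDATE text_values SET value = ? "
    "WHERE id IN (SELECT id FROM nodes WHERE id IN (?, ?))",
    ("zzz", 1, 3),
)
rows = conn.execute("SELECT id, value FROM text_values ORDER BY id").fetchall()
print(rows)
# → [(1, 'zzz'), (2, 'b'), (3, 'zzz')]
```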



Re: [sqlalchemy] Multiple many to one relationships to same table

2016-03-19 Thread 'Chris Norman' via sqlalchemy

Hi,
You're completely correct - it's a bit stupid if all I'm using is SQL.

Actually the rows in the table are just for storing things reliably, and 
all the magic happens with python classes which are as you describe.


Thank you for the help, I'll give it a go.

On 15/03/2016 20:40, Christopher Lee wrote:
A relationship usually looks at the foreign keys on the tables you 
specify and constructs the queries appropriately. The error you are 
getting happens because there are multiple foreign keys between the 
tables (in this case, the same table referencing itself... shudder...).


You need to tell each relationship which foreign key to use.

e.g.,:
contents = relationship(DbObject, foreign_keys='DbObject.location_id', ...)

-
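A minimal runnable sketch of that disambiguation for the self-referential case — class and column names are assumptions based on the thread, and the import paths assume SQLAlchemy 1.4+:

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class DbObject(Base):
    __tablename__ = "objects"
    id = Column(Integer, primary_key=True)
    location_id = Column(Integer, ForeignKey("objects.id"))
    owner_id = Column(Integer, ForeignKey("objects.id"))

    # Two self-referential FKs, so each relationship must name its own:
    location = relationship("DbObject", foreign_keys=[location_id],
                            remote_side=[id], backref="contents")
    owner = relationship("DbObject", foreign_keys=[owner_id],
                         remote_side=[id], backref="possessions")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)

room = DbObject()
lamp = DbObject(location=room)
session.add_all([room, lamp])
session.commit()
print(room.contents)  # the lamp object, via the location relationship
```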

Technical problems aside, your database schema has some pretty serious 
flaws.  Having a single "objects" table that can relate to itself in 
all the ways in which things can relate to other things is a 
logistical nightmare, both in terms of performance and clarity.


For example, your schema looks suspiciously like people, places and 
things are all DBObjects.  A place has an id, and x, y, z 
coordinates.  A thing in that location would have a foreign key to the 
first record.  Well, what does it mean if a thing has a location_id to 
one set of coordinates, but has another set of coordinates in its x, 
y, and z values? Similarly, it looks like things have an owner_id that 
references another object, but what would it mean for a place to have 
an owner, or a person?  How can you tell what type of thing something 
is?  How would you query for all the people, or all the things?


A more sane schema might be something like:

class Person(base):
    __tablename__ = 'person'
    person_id = Column(Integer, primary_key=True)

class Place(base):
    __tablename__ = 'place'
    place_id = Column(Integer, primary_key=True)
    x = Column(Float)
    y = Column(Float)
    z = Column(Float)

class Thing(base):
    __tablename__ = 'thing'
    thing_id = Column(Integer, primary_key=True)
    location_id = Column(Integer, ForeignKey(Place.place_id))
    owner_id = Column(Integer, ForeignKey(Person.person_id))

etc.



On Tue, Mar 15, 2016 at 2:12 AM, 'Chris Norman' via sqlalchemy 
> wrote:


Hi,

On 14/03/2016 15:19, Mike Bayer wrote:



On 03/14/2016 11:15 AM, 'Chris Norman' via sqlalchemy wrote:

Hi all,
I've tried googling for this, and I get nothing. I have a
table to store
data about objects. Each object should have a location
property which
links back to the same table. Conversely, each object
should have a
contents property which shows all objects which have their
location set
to this object.


documentation for multiple relationships to the same table is at:


http://docs.sqlalchemy.org/en/rel_1_0/orm/join_conditions.html#handling-multiple-join-paths


hope this helps


Thanks for the docs, however I've read this page loads over the
past few days - since my google searches keep turning it up - and
it seemed like the right place to be.

I can't find anything in there that helps me. This isn't to say
it's not there, but my knowledge of SQL is fairly limited, and my
understanding of things like the relationship function aren't very
in depth.

Which bits should I specifically read to help?

Sorry if my questions seem a little stupid - as I say, databases
are something I use rather than understand. Recently I've been
using Django where everything is handled for me, so I'm still
struggling to come to grips with relationships and how they're
made up in the real (Django-free) world.

Thanks again.




[sqlalchemy] Re: Defining relationships (was: joins instead of filters remove attributes of results)

2016-03-19 Thread Jonathan Vanasco
GENERALLY SPEAKING

if you have relationships declared on the ORM and just want to iterate on 
them, then you don't have to join anything.  

The relationships will just iterate based on the join conditions on the orm.

if you end up joining or filtering, you should read up on contains_eager

http://docs.sqlalchemy.org/en/latest/orm/loading_relationships.html?highlight=contains_eager#relationship-loader-api
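A small self-contained sketch of contains_eager — the Item/Vendor names are invented for illustration, and the import paths assume SQLAlchemy 1.4+. The point is that the explicit join is reused to populate the relationship, so iterating the result emits no extra lazy loads:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import (Session, contains_eager, declarative_base,
                            relationship)

Base = declarative_base()

class Vendor(Base):
    __tablename__ = "vendor"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

class Item(Base):
    __tablename__ = "item"
    id = Column(Integer, primary_key=True)
    vendor_id = Column(ForeignKey("vendor.id"))
    vendor = relationship("Vendor")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add(Item(vendor=Vendor(name="acme")))
session.commit()
session.expunge_all()

# The join both filters the rows and feeds Item.vendor in one query:
items = (
    session.query(Item)
    .join(Item.vendor)
    .filter(Vendor.name == "acme")
    .options(contains_eager(Item.vendor))
    .all()
)
print(items[0].vendor.name)  # acme
```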






Re: [sqlalchemy] Backref primary join condition stopped working in 0.9.4

2016-03-19 Thread Mike Bayer



On 03/16/2016 06:10 PM, Thorsten von Stein wrote:

Mike,

Thank you very much for your quick reply and for creating the test case.
Upon closer inspection, I noticed that the errors in my system resulted
from a special case of a self-referential relationship as in the
following modification of your test case, in which additionally the
post_update flag is required. In that case, the assertion fails.
However, with an additional remote annotation to make the _del==0
condition unambiguous, the example works again.


Right, you'd need that remote() there for that mapping to be correct. The 
post_update part does not matter.



Now the only question that remains is why the original version was working
fine up to 0.9.3.


Looking at the 0.9.4 changelog, this issue describes just what we're 
doing here:


http://docs.sqlalchemy.org/en/rel_1_0/changelog/changelog_09.html#change-c7ec7a2b899f6d33f4ef3c25d538daa0

and indeed the revision at 
https://bitbucket.org/zzzeek/sqlalchemy/commits/825d3b0d6db4 is where 
the test fails without the extra remote().  I don't really remember 
what we're doing here, and it would take me a few hours of staring to 
re-understand it. It looks like the pattern you have is possibly the 
reverse of what's being fixed here, but it was nonetheless impacted to do 
"the right thing", which in your case was the thing you didn't want :).


from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()



class C(Base):
    __tablename__ = 'c'
    id = Column(Integer, primary_key=True)
    parent_id = Column(ForeignKey('c.id'))
    _del = Column("del", Integer, default=0)

def make_rel(cls, remoteCls, foreignKey, backref_name):
    br = backref(
        backref_name,
        collection_class=list,
        primaryjoin=and_(
            remoteCls.id == remote(getattr(cls, foreignKey)),
            cls._del == 0  # works with: remote(cls._del) == 0
        )
    )

    rel = relationship(
        remoteCls,
        remote_side=remoteCls.id,
        primaryjoin=getattr(cls, foreignKey) == remoteCls.id,
        post_update=True,
        backref=br)
    return rel

C.parent = make_rel(C, C, "parent_id", "children")

e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)

s = Session(e)


c1, c2, c3 = C(), C(), C()
c0 = C(children=[c1, c2, c3])

s.add(c0)
s.commit()

c2._del = 1
s.commit()


assert c0.children == [c1, c3]



On Wed, Mar 16, 2016 at 12:42 PM, Mike Bayer wrote:



On 03/16/2016 02:37 PM, Thorsten von Stein wrote:

For several years, I have been using a pattern for making a many-to-one
relationship from *cls* to *remoteCls* with a one-to-many backref with a
join condition cls.foreignKey == remoteCls.id, where *cls* has a
deletion flag _del which should exclude *cls* instances with
_del != 0 from the backref collection.

Since the condition involving _del is only relevant in the one-to-many
direction, I defined separate primaryjoin conditions which included this
condition only for the backref.

br = backref(
    backref_name,
    collection_class=list,
    primaryjoin=and_(
        remoteCls.id == remote(getattr(cls, foreignKey)),
        cls._del == 0))

rel = relationship(
    remoteCls,
    remote_side=remoteCls.id,
    primaryjoin=getattr(cls, foreignKey) == remoteCls.id,
    backref=br)

I have used this pattern successfully for years until I recently
upgraded SqlAlchemy to the latest version and found that the join
condition on the backref seems to be ignored and queries include
instances that are flagged as deleted via the _del column. I tested
several intermediate SqlAlchemy versions and found that the first one
which breaks the pattern is 0.9.4.

Subsequently I found that removing the primary join condition on the
backref and including the _del != 0 condition in the forward primary
join condition seems to restore the intended behavior, but now many
queries involving large collections are slowed so dramatically as to
make this solution unworkable.

I reviewed the descriptions of changes, but they did not clarify for me
why the pattern above does not work any more. Is there a flaw in my code
that I am missing?


There are no changes that should affect the behavior of relationship
in this way.  If anything, I'd wonder if the "0" value here is
actually a boolean and is interacting with some backend-specific
typing behavior, but there's not enough detail here to know.

Below is a complete test of your concept which succeeds.  Please
alter this 

[sqlalchemy] joins instead of filters remove attributes of results

2016-03-19 Thread Alex Hall
Hello all,
I'm running a different query than yesterday. Before, I had something like:

items = session.query(itemTable, attachmentTable, attachmentTextTable,
assignmentTable, attributeTable, attributeValueTable,
attributeValueAssignmentTable, vendorTable)\
.filter(attachmentTable.itm_id == itemTable.itm_id)\
#and so on, a bunch of .filter calls

Then, in the loop iterating over the results, I could do this:

for result in queryResults:
 itemID = result.item.itm_id

Now that I'm using a bunch of outer left joins, that code is suddenly
not working. I get an error when I say
result.item.itm_id
AttributeError: 'item' object has no attribute 'item'

The problem is that my query starts out with only one table passed to
session.query(), not all of them. Thus my result is of type 'item',
which is the table passed in. That would be okay, except that I need
to access values of other tables in the result, so even if I change
id = result.item.itm_id
to
id = result.itm_id
When I then say
description = result.attach_text.att_value
AttributeError: 'item' object has no attribute 'attach_text'

I know why it doesn't. What I don't know is how to get my query
results to hold all the information from all the tables, or how to
access it if they do already, but in a different way than before. My
new query is this:

items = session.query(itemTable)\
.outerjoin(vendorTable, vendorTable.PVVNNO == itemTable.itm_vendornum)\
.outerjoin(assignmentTable, assignmentTable.itm_id == itemTable.itm_id)\
.filter(assignmentTable.att_id == attachmentTable.att_id)\
.outerjoin(attachmentTextTable, assignmentTable.att_id ==
attachmentTextTable.att_id)\
.outerjoin(attributeValueAssignmentTable,
attributeValueAssignmentTable.itm_id == itemTable.itm_id)\
.outerjoin(attributeTable, attributeTable.attr_id ==
attributeValueAssignmentTable.attr_id)\
.filter(attributeValueTable.attr_value_id ==
attributeValueAssignmentTable.attr_value_id)\
.yield_per(1000)

I've also tried the same query, but with the first line changed to:
items = session.query(itemTable, attachmentTable, attachmentTextTable,
assignmentTable, attributeTable, attributeValueTable,
attributeValueAssignmentTable, vendorTable)\

The problem here is that, while result.item.* works as expected, other
tables don't. For instance, result.attach_text.att_value yields an
AttributeError: 'NoneType' object has no attribute 'att_value'.
Clearly, the other tables are in the result, but they're all None. I
expected something like that, and only added them back in to see if it
might help, but since I call query().outerjoin() I didn't think it
would work.

I should note that I renamed most of the tables by assigning variables
to base.classes.tableName, which is why I'm using "itemTable" here,
but in getting attributes of results I use just "item". The 'item'
table is called 'item', but I assigned it to a variable called
'itemTable', just for clarity in the script.

Is there a way to access the values of a query like this? At the very
least, is there a way I can print out all the objects the result
object has, so I can work out what to do? Thanks for any help!
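(Not an answer from the thread, but for illustration of the mechanics in play:) when `session.query()` is given several entities, each result row is a keyed tuple, and the slot for an outer-joined entity is simply `None` where no match exists. A minimal sketch with made-up stand-in models — `Item` and `Attachment` here are hypothetical, not the reflected tables above — assuming SQLAlchemy 1.4+:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class Item(Base):
    __tablename__ = "item"
    itm_id = Column(Integer, primary_key=True)


class Attachment(Base):
    __tablename__ = "attachment"
    att_id = Column(Integer, primary_key=True)
    itm_id = Column(ForeignKey("item.itm_id"))
    att_value = Column(String)


e = create_engine("sqlite://")
Base.metadata.create_all(e)
s = Session(e)
s.add_all([Item(itm_id=1), Item(itm_id=2),
           Attachment(att_id=10, itm_id=1, att_value="spec sheet")])
s.commit()

# Query both entities; each row is a keyed tuple (Item, Attachment),
# and the Attachment slot is None for items with no matching row.
rows = (s.query(Item, Attachment)
         .outerjoin(Attachment, Attachment.itm_id == Item.itm_id)
         .order_by(Item.itm_id)
         .all())

for item, attachment in rows:
    value = attachment.att_value if attachment is not None else None
    print(item.itm_id, value)
```

So with the multi-entity form of the query, the other tables aren't missing from the result; they just have to be guarded against `None` on the outer-joined side.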

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] back_populates doesn't work properly on one-to-one relationships (uselist=False)

2016-03-19 Thread Jack Zhou
Ah, hadn't realized that they don't work on load at all. Is there any
reason why they couldn't be made to communicate on load as well? It seems a
little weird that the behavior is subtly different when an object is loaded
vs when it's constructed.

On Fri, Mar 18, 2016 at 6:50 AM Mike Bayer wrote:

>
>
> On 03/17/2016 09:15 PM, univerio wrote:
> > Consider these models:
> >
> > class Foo(Base):
> >  __tablename__ = 'foo'
> >
> >  id = Column(Integer, primary_key=True, autoincrement=True)
> >  bar_id = Column(Integer, ForeignKey("bar.id"), unique=True)
> >  bar = relationship(lambda: Bar, back_populates="foo")
> >
> >
> > class Bar(Base):
> >  __tablename__ = "bar"
> >
> >  id = Column(Integer, primary_key=True, autoincrement=True)
> >  foo = relationship(Foo, back_populates="bar", uselist=False)
> >
> >
> > When loading foo.bar, I would expect foo.bar.foo to be set to foo
> > without an additional query to the database, but this doesn't appear to
> > be the case. Test code:
> >
> >
> > engine = create_engine("sqlite://", echo=True)
> > Base.metadata.create_all(bind=engine)
> > session = Session(bind=engine)
> >
> > session.add(Foo(bar=Bar()))
> > session.flush()
> > session.expunge_all()
> >
> > f = session.query(Foo).first()
> > b = f.bar
> > print("there shouldn't be anymore SQL queries")
> > assert b.foo is f
>
> backrefs only communicate with each other when you manually set or
> remove items from an attribute.  They don't coordinate during a load
> operation.   To suit the common use case of a collection load where the
> other many-to-one side should not emit SQL, this is easy because the
> many-to-one side knows it can do a simple lookup in the identity map.
> With the one-to-many/one side, this is not possible because we don't
> know the primary key of the single element that would be here.
>
>
>
>
> >
> >
> > This results in the following output:
> >
> > [...]
> > 2016-03-17 18:04:46,858 INFO sqlalchemy.engine.base.Engine BEGIN
> > (implicit)
> > 2016-03-17 18:04:46,858 INFO sqlalchemy.engine.base.Engine INSERT
> > INTO bar DEFAULT VALUES
> > 2016-03-17 18:04:46,858 INFO sqlalchemy.engine.base.Engine ()
> > 2016-03-17 18:04:46,859 INFO sqlalchemy.engine.base.Engine INSERT
> > INTO foo (bar_id) VALUES (?)
> > 2016-03-17 18:04:46,859 INFO sqlalchemy.engine.base.Engine (1,)
> > 2016-03-17 18:04:46,860 INFO sqlalchemy.engine.base.Engine SELECT
> > foo.id AS foo_id, foo.bar_id AS foo_bar_id
> > FROM foo
> >   LIMIT ? OFFSET ?
> > 2016-03-17 18:04:46,860 INFO sqlalchemy.engine.base.Engine (1, 0)
> > 2016-03-17 18:04:46,861 INFO sqlalchemy.engine.base.Engine SELECT
> > bar.id AS bar_id
> > FROM bar
> > WHERE bar.id = ?
> > 2016-03-17 18:04:46,861 INFO sqlalchemy.engine.base.Engine (1,)
> > there shouldn't be anymore SQL queries
> > 2016-03-17 18:04:46,861 INFO sqlalchemy.engine.base.Engine SELECT
> > foo.id AS foo_id, foo.bar_id AS foo_bar_id
> > FROM foo
> > WHERE ? = foo.bar_id
> > 2016-03-17 18:04:46,861 INFO sqlalchemy.engine.base.Engine (1,)
> > 2016-03-17 18:04:46,862 INFO sqlalchemy.engine.base.Engine ROLLBACK
> >
> >
> > I understand that when uselist=True there might be other items in the
> > list so you can't populate that list, but when uselist=False, especially
> > when there's a unique constraint ensuring that there can only ever be
> > one entry, you should be able to populate it. Am I missing something?
> >
> >
> > Jack
> >
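Not part of this exchange, but one possible workaround when you already hold both objects: seed the reverse attribute yourself with `set_committed_value()`, which populates the attribute as if it had been loaded, without flagging it dirty or emitting SQL. A sketch using Jack's models, assuming SQLAlchemy 1.4+:

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship
from sqlalchemy.orm.attributes import set_committed_value

Base = declarative_base()


class Foo(Base):
    __tablename__ = "foo"
    id = Column(Integer, primary_key=True)
    bar_id = Column(Integer, ForeignKey("bar.id"), unique=True)
    bar = relationship("Bar", back_populates="foo")


class Bar(Base):
    __tablename__ = "bar"
    id = Column(Integer, primary_key=True)
    foo = relationship(Foo, back_populates="bar", uselist=False)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add(Foo(bar=Bar()))
session.commit()
session.expunge_all()

f = session.query(Foo).first()
b = f.bar
# Manually seed the reverse side so that accessing b.foo emits no SQL.
set_committed_value(b, "foo", f)
assert b.foo is f
```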

Re: [sqlalchemy] returning a dynamic query from Column.default callable

2016-03-19 Thread Jonathan Beluch
Awesome, both look good, will try them out. Thanks for the quick reply.

On Thursday, March 17, 2016 at 3:54:11 PM UTC-6, Mike Bayer wrote:
>
> Just core, you can call some_table.append_column(Column(... whatever ...)) at 
> any time.The only events I can think of are the DDL attachment 
> events: 
>
> from sqlalchemy import * 
> from sqlalchemy import event 
>
> m = MetaData() 
>
>
> table_a = Table( 
>  'a', m, 
>  Column('id', Integer) 
> ) 
>
>
> @event.listens_for(Table, "after_parent_attach") 
> def evt(target, parent): 
>  if parent is m and target.name == 'b': 
>  m.tables['a'].append_column( 
>  Column('data', Integer, default=select([target]).as_scalar()) 
>  ) 
>
> table_b = Table( 
>  'b', m, 
>  Column('id', Integer) 
> ) 
>
> e = create_engine("sqlite://", echo=True) 
> m.create_all(e) 
>
> e.execute(table_a.insert()) 
>
>
> I guess if you really want to get the column default to be something 
> that becomes the SQL at the last minute, you could do something like this: 
>
> from sqlalchemy import * 
> from sqlalchemy.sql import ColumnElement 
> from sqlalchemy.ext.compiler import compiles 
>
>
> class MyThing(ColumnElement): 
>  pass 
>
>
> @compiles(MyThing) 
> def _generate_my_thing(element, compiler, **kw): 
>  return compiler.process(select([table_b]).as_scalar()) 
>
> m = MetaData() 
>
>
> table_a = Table( 
>  'a', m, 
>  Column('id', Integer), 
>  Column('data', Integer, default=MyThing()) 
> ) 
>
> table_b = Table( 
>  'b', m, 
>  Column('id', Integer) 
> ) 
>
> e = create_engine("sqlite://", echo=True) 
> m.create_all(e) 
>
> e.execute(table_a.insert()) 
>
>
>
>
>
> On 03/17/2016 05:25 PM, Jonathan Beluch wrote: 
> > We're just using core, is there some equivalent? 
> > 
> > On Thursday, March 17, 2016 at 3:19:23 PM UTC-6, Mike Bayer wrote: 
> > 
> > 
> > 
> > On 03/17/2016 04:47 PM, Jonathan Beluch wrote: 
> >  > Background: Using core we have tables defined in a few separate 
> > files. 
> >  > Goal: To have column defaults be selectables which reference 
> other 
> >  > tables while avoiding circular imports. To avoid circular imports 
> I 
> >  > cannot always build the selects at import time, they have to be 
> >  > generated inside a function that takes table/col names similar to 
> > how 
> >  > FKs work. 
> >  > 
> >  > It seems that a callable for Column.default cannot return an 
> > uncompiled 
> >  > statement. Two solutions I see: 
> >  > 
> >  > 1) Use the context provided to default callables to compile the 
> > dynamic 
> >  > select statement. 
> >  > 2) Implement something similar to FKs, using all the parent 
> attach 
> >  > events to set .arg to a selectable on a subclass of 
> ColumnDefault. 
> >  > 
> >  > Thoughts? 
> > 
> > A column default callable is expected to produce the value that's to 
> be 
> > embedded into the INSERT values.   You can execute any SQL you'd 
> like 
> > there, but that's after the INSERT statement's string form is 
> already 
> > decided. 
> > 
> > If the goal is that the default is a SQL clause to be embedded in 
> the 
> > string form of the INSERT, then you use a fixed default that is 
> > represented by that SQL clause. 
> > 
> > If you can't generate that SQL clause due to imports, there's 
> various 
> > ways to defer the production of the Column but the simplest way, not 
> > necessarily the nicest, is to stick it in declare_first: 
> > 
> > from sqlalchemy import * 
> > from sqlalchemy.orm import * 
> > from sqlalchemy.ext.declarative import declarative_base 
> > 
> > Base = declarative_base() 
> > 
> > 
> > class B(Base): 
> >   __tablename__ = 'b' 
> >   id = Column(Integer, primary_key=True) 
> > 
> >   @classmethod 
> >   def __declare_first__(cls): 
> >   cls.b = Column(Integer, default=select([A]).as_scalar()) 
> > 
> > 
> > class A(Base): 
> >   __tablename__ = 'a' 
> >   id = Column(Integer, primary_key=True) 
> > 
> > e = create_engine("sqlite://", echo=True) 
> > configure_mappers() 
> > Base.metadata.create_all(e) 
> > 
> > s = Session(e) 
> > s.add(B()) 
> > s.commit() 
> > 
> > 
> > __declare_first__ is actually just a hook for the before_configured 
> > event so that's actually using the events in any case. 
> > 
> > 
> > 
> > 
> >  > 
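For readers on newer SQLAlchemy, a hedged re-spelling of the compiler-extension sketch quoted above: `select()` no longer takes a list and `as_scalar()` has become `scalar_subquery()` (assumes SQLAlchemy 1.4+; otherwise the same idea of deferring the SQL to compile time):

```python
from sqlalchemy import Column, Integer, MetaData, Table, create_engine, select
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql import ColumnElement


class MyThing(ColumnElement):
    """Placeholder element whose SQL is produced at compile time."""
    inherit_cache = True


@compiles(MyThing)
def _generate_my_thing(element, compiler, **kw):
    # table_b is defined below; by the time the INSERT compiles, it exists,
    # so circular import/definition order is no longer a problem.
    return compiler.process(select(table_b.c.id).scalar_subquery())


m = MetaData()

table_a = Table(
    "a", m,
    Column("id", Integer, primary_key=True),
    Column("data", Integer, default=MyThing()),
)

table_b = Table(
    "b", m,
    Column("id", Integer),
)

engine = create_engine("sqlite://")
m.create_all(engine)

with engine.begin() as conn:
    conn.execute(table_b.insert().values(id=7))
    conn.execute(table_a.insert())  # "data" default renders the subquery
    data = conn.execute(select(table_a.c.data)).scalar_one()
```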

[sqlalchemy] Re: Database synchronisation

2016-03-19 Thread milan53064617

Right now I am taking a test run object from the local database, then I
call expunge on that object (cascade is set to expunge). Then I put this
object into this Python method:

def _reset_primary_keys(self, test_run):
    make_transient(test_run)
    test_run.ID = None
    if test_run.OrderNumber:
        make_transient(test_run.OrderNumber)
        test_run.OrderNumber.ID = None

    for equipment in test_run.TestEquipments:
        make_transient(equipment)
        equipment.ID = None

    for trd in test_run.TestRunToDevs:
        make_transient(trd)
        trd.ID = None

        if trd.TestedDut:
            make_transient(trd.TestedDut)
            trd.TestedDut.ID = None

        for test_step in trd.TestSteps:
            make_transient(test_step)
            test_step.ID = None

            for test_result in test_step.TestResults:
                make_transient(test_result)
                test_result.ID = None


This method resets all primary keys, so I can merge the object into the master 
database, which will generate new primary keys. Is there a better or easier way 
to do it?
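A hedged sketch (not from the thread) of one possible generalization: walk each object's mapper relationships recursively instead of spelling out every collection by hand, so newly added child tables are picked up automatically. `TestRun`/`TestStep` below are made-up stand-ins, single-column surrogate keys are assumed, and SQLAlchemy 1.4+ is assumed:

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import (Session, declarative_base, make_transient,
                            object_mapper, relationship)

Base = declarative_base()


class TestRun(Base):
    __tablename__ = "test_run"
    ID = Column(Integer, primary_key=True)
    TestSteps = relationship("TestStep")


class TestStep(Base):
    __tablename__ = "test_step"
    ID = Column(Integer, primary_key=True)
    test_run_id = Column(ForeignKey("test_run.ID"))


def reset_primary_keys(obj, _seen=None):
    """Recursively detach obj and every related object, clearing
    primary-key attributes so that merge() generates fresh keys."""
    if _seen is None:
        _seen = set()
    if obj is None or id(obj) in _seen:
        return
    _seen.add(id(obj))
    mapper = object_mapper(obj)
    # Collect related objects BEFORE make_transient(), which discards
    # any still-expired (unloaded) state.
    related = []
    for rel in mapper.relationships:
        value = getattr(obj, rel.key)
        if value is None:
            continue
        related.extend(value if rel.uselist else [value])
    make_transient(obj)
    for col in mapper.primary_key:
        setattr(obj, mapper.get_property_by_column(col).key, None)
    for child in related:
        reset_primary_keys(child, _seen)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
run = TestRun(TestSteps=[TestStep(), TestStep()])
session.add(run)
session.commit()

reset_primary_keys(run)
assert run.ID is None and all(s.ID is None for s in run.TestSteps)
```

After this, `other_session.merge(run)` on the master database should insert the whole graph with freshly generated keys; composite keys or association tables would need extra care.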



Re: [sqlalchemy] returning a dynamic query from Column.default callable

2016-03-19 Thread Jonathan Beluch
We're just using core, is there some equivalent?

On Thursday, March 17, 2016 at 3:19:23 PM UTC-6, Mike Bayer wrote:
>
>
>
> On 03/17/2016 04:47 PM, Jonathan Beluch wrote: 
> > Background: Using core we have tables defined in a few separate files. 
> > Goal: To have column defaults be selectables which reference other 
> > tables while avoiding circular imports. To avoid circular imports I 
> > cannot always build the selects at import time, they have to be 
> > generated inside a function that takes table/col names similar to how 
> > FKs work. 
> > 
> > It seems that a callable for Column.default cannot return an uncompiled 
> > statement. Two solutions I see: 
> > 
> > 1) Use the context provided to default callables to compile the dynamic 
> > select statement. 
> > 2) Implement something similar to FKs, using all the parent attach 
> > events to set .arg to a selectable on a subclass of ColumnDefault. 
> > 
> > Thoughts? 
>
> A column default callable is expected to produce the value that's to be 
> embedded into the INSERT values.   You can execute any SQL you'd like 
> there, but that's after the INSERT statement's string form is already 
> decided. 
>
> If the goal is that the default is a SQL clause to be embedded in the 
> string form of the INSERT, then you use a fixed default that is 
> represented by that SQL clause. 
>
> If you can't generate that SQL clause due to imports, there's various 
> ways to defer the production of the Column but the simplest way, not 
> necessarily the nicest, is to stick it in declare_first: 
>
> from sqlalchemy import * 
> from sqlalchemy.orm import * 
> from sqlalchemy.ext.declarative import declarative_base 
>
> Base = declarative_base() 
>
>
> class B(Base): 
>  __tablename__ = 'b' 
>  id = Column(Integer, primary_key=True) 
>
>  @classmethod 
>  def __declare_first__(cls): 
>  cls.b = Column(Integer, default=select([A]).as_scalar()) 
>
>
> class A(Base): 
>  __tablename__ = 'a' 
>  id = Column(Integer, primary_key=True) 
>
> e = create_engine("sqlite://", echo=True) 
> configure_mappers() 
> Base.metadata.create_all(e) 
>
> s = Session(e) 
> s.add(B()) 
> s.commit() 
>
>
> __declare_first__ is actually just a hook for the before_configured 
> event so that's actually using the events in any case. 
>
>
>
>
> > 
>
