RE: [sqlalchemy] Tracking instance commit status

2016-08-11 Thread Gombas, Gabor
Well, it is somewhat similar to my use case: there is a part (kind of a 
library) of the application which may load an object, or may transparently 
create one if it did not exist before. And there is a completely different part 
of the application which wants to do something only if the object was freshly 
created (because it would be too expensive to do unconditionally) - but it has 
no idea where the object is coming from. What I've done is add an attribute 
which gets set to True in __init__() and to False by the @reconstructor - a 
more complete implementation would probably also want to reset the flag to 
False on objects which are alive when a transaction commits, but my use case 
does not need that complexity.
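A minimal sketch of the pattern described above (the model and the
`freshly_created` attribute name are made up for illustration, not taken from
the author's code):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, reconstructor, sessionmaker

Base = declarative_base()


class Widget(Base):
    __tablename__ = "widget"
    id = Column(Integer, primary_key=True)
    name = Column(String)

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # __init__ only runs for objects constructed in Python,
        # i.e. freshly created ones.
        self.freshly_created = True

    @reconstructor
    def _init_on_load(self):
        # The reconstructor runs instead of __init__ when the object
        # is loaded from the database.
        self.freshly_created = False
```

Any part of the application can then check `obj.freshly_created` without
knowing where the object came from.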

-Original Message-
From: sqlalchemy@googlegroups.com [mailto:sqlalchemy@googlegroups.com] On 
Behalf Of Simon King
Sent: 11 August 2016 10:33
To: sqlalchemy@googlegroups.com
Subject: Re: [sqlalchemy] Tracking instance commit status

On Thu, Aug 11, 2016 at 8:30 AM, Wibowo Arindrarto wrote:
> Dear all,
>
> I have a little problem with getting object states that have been flushed
> but not committed. As far as I understand, SQLAlchemy does not have an
> official object state corresponding to this. However, in my application I
> need to track such objects. My current approach is to listen to the
> ``after_flush`` session event, similar to the one outlined here. However,
> that gets messy quite quickly if I want to track different instances from
> different tables as the approach uses a global db.session.info state.
>
> For the record, I'm also open to suggestions / critiques on whether tracking
> flushed but not committed objects is a good idea. The reason I am doing
> this is because I need to do something like ``get_or_create`` (get a
> database record if present, create it otherwise). But I'm doing this
> multiple times, and some objects may be a container of multiple other
> objects. To make the ``get_or_create`` query of these container objects
> simpler, I store the hash of the object IDs that it contains. Doing so means
> that I need to have all the contained object IDs, which means I have to at
> least flush the contained objects first before I can run ``get_or_create``
> on the container object.
>
> Now, my question is, is it possible to extend my models such that I can
> track their commit status? If so, how do I do it? If not, is there a
> better way to tackle my underlying problem (i.e. doing ``get_or_create`` on
> a container object)? I am aware of the ``persistent`` state, but it does not
> make the distinction on the commit status of an object.
>

I don't really understand the question here - it's not clear to me why
get_or_create for the container objects is any different than
get_or_create for any other object, or what that has to do with
flushed-but-not-committed records. Why do you need to treat objects
that have been flushed but not committed any differently to committed
ones?

Simon
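For context, the get_or_create pattern under discussion is commonly written
along these lines (a sketch, not code from the thread); flush() is what makes
the flushed-but-not-committed state appear, because it emits the INSERT and
populates generated IDs without committing:

```python
def get_or_create(session, model, **kwargs):
    """Return (instance, created): fetch a row matching the keyword
    filters, or create one and flush it so its generated primary key
    is available while the transaction stays open."""
    instance = session.query(model).filter_by(**kwargs).one_or_none()
    if instance is not None:
        return instance, False
    instance = model(**kwargs)
    session.add(instance)
    # flush() emits the INSERT and assigns autogenerated keys,
    # but does not commit the enclosing transaction.
    session.flush()
    return instance, True
```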

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.




NOTICE: Morgan Stanley is not acting as a municipal advisor and the opinions or 
views contained herein are not intended to be, and do not constitute, advice 
within the meaning of Section 975 of the Dodd-Frank Wall Street Reform and 
Consumer Protection Act. If you have received this communication in error, 
please destroy all electronic and paper copies and notify the sender 
immediately. Mistransmission is not intended to waive confidentiality or 
privilege. Morgan Stanley reserves the right, to the extent permitted under 
applicable law, to monitor electronic communications. This message is subject 
to terms available at the following link: 
http://www.morganstanley.com/disclaimers  If you cannot access these links, 
please notify us by reply message and we will send the contents to you. By 
communicating with Morgan Stanley you consent to the foregoing and to the voice 
recording of conversations with personnel of Morgan Stanley.



RE: [sqlalchemy] how to pick up constraint violation errors in Alchemy

2016-01-04 Thread Gombas, Gabor
If you're using psycopg2, then you may want to use the Diagnostics extension - 
check the documentation of psycopg2.

Gabor
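For illustration, the idea looks roughly like this (the helper name is made
up; it assumes psycopg2 >= 2.5, which attaches a Diagnostics object carrying
server-supplied error details to the raised exception):

```python
def violated_constraint(integrity_error):
    """Given a SQLAlchemy IntegrityError raised over psycopg2, return
    the name of the violated constraint, or None if unavailable.

    .orig is the underlying DBAPI exception; psycopg2 exposes a
    Diagnostics object on it as .diag, with fields such as
    constraint_name, table_name and column_name - so no regex parsing
    of the message text is needed.
    """
    diag = getattr(integrity_error.orig, "diag", None)
    if diag is None:
        return None
    return getattr(diag, "constraint_name", None)
```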

> -Original Message-
> From: sqlalchemy@googlegroups.com
> [mailto:sqlalchemy@googlegroups.com] On Behalf Of Krishnakant
> Sent: 03 January 2016 08:16
> To: sqlalchemy@googlegroups.com
> Subject: Re: [sqlalchemy] how to pick up constraint violation errors in
> Alchemy
> 
> 
> 
> On Sunday 03 January 2016 04:20 AM, Mike Bayer wrote:
> >
> > On 01/02/2016 01:02 PM, Krishnakant wrote:
> >> hello all,
> >> I wish to know how I can pick up the constraint failure errors in my code?
> >> I think I would probably have to pick up the errors in a try: except:
> >> system?
> > that is correct.
> >
> >> But What is the exact way of picking up the message?
> > You'd need to catch the type of exception, such as IntegrityError
> > which is usually what you will get for a constraint violation.  Beyond
> > that, if you need to programmatically have more information you'd need
> > to parse the text of the exception with a regular expression.
> 
> So if the unique constraint is violated, will I still have to catch it as an
> IntegrityError exception?
> 
> As I said I am using psycopg2 for postgresql.
> Thanks for the tips.
> happy hacking.
> Krishnakant.
> 







RE: [sqlalchemy] How to commit objects with circular FK references?

2015-12-12 Thread Gombas, Gabor
Alternatively, you could make the constraints deferrable, and set them to 
deferred mode before adding objects with circular dependencies.
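A sketch of what that looks like (PostgreSQL semantics; table names are
illustrative): mark the foreign key DEFERRABLE INITIALLY DEFERRED so it is
only checked at COMMIT, which lets mutually referencing rows be inserted in
any order within a transaction.

```python
from sqlalchemy import Column, ForeignKey, Integer, MetaData, Table
from sqlalchemy.dialects import postgresql
from sqlalchemy.schema import CreateTable

metadata = MetaData()

entry = Table(
    "entry", metadata,
    Column("entry_id", Integer, primary_key=True),
)

widget = Table(
    "widget", metadata,
    Column("widget_id", Integer, primary_key=True),
    # Checked only at COMMIT time, so rows that point at each other
    # can be inserted in any order inside the transaction.
    Column("favorite_entry_id", Integer,
           ForeignKey("entry.entry_id",
                      deferrable=True, initially="DEFERRED")),
)

ddl = str(CreateTable(widget).compile(dialect=postgresql.dialect()))
# The emitted DDL ends the FK clause with:
#   REFERENCES entry (entry_id) DEFERRABLE INITIALLY DEFERRED
# Alternatively, constraints declared merely DEFERRABLE can be switched
# per transaction with:  SET CONSTRAINTS ALL DEFERRED
```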

> -Original Message-
> From: sqlalchemy@googlegroups.com
> [mailto:sqlalchemy@googlegroups.com] On Behalf Of Mike Bayer
> Sent: 12 December 2015 02:49
> To: sqlalchemy@googlegroups.com
> Subject: Re: [sqlalchemy] How to commit objects with circular FK references?
> 
> 
> 
> On 12/11/2015 07:47 PM, Gerald Thibault wrote:
> > Is there a way to perform a Metadata.create_all() but have it only
> > create the tables, without any of the FKs? And then create the FKs in
> > one go after the fixture data has been loaded into the DB?
> 
> first off, this is unnecessary because the DDL system can now create
> mutually-dependent foreign key constraints without any use of the
> use_alter flag; ALTER TABLE ADD CONSTRAINT is used automatically when a
> cycle is detected and this flag is legacy.
> 
> However, if you want to manually control this you can use the rules at
> http://docs.sqlalchemy.org/en/latest/core/ddl.html#controlling-ddl-
> sequences
> to achieve this effect.  You'd apply the AddConstraint object to all of the
> ForeignKeyConstraint objects and then set them up with events.  Here is an
> example:
> 
> 
> from sqlalchemy import Column, MetaData, Integer, create_engine, \
>     ForeignKey, ForeignKeyConstraint
> from sqlalchemy.ext.declarative import declarative_base
> from sqlalchemy import event
> 
> convention = {
>     "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
> }
> 
> metadata = MetaData(naming_convention=convention)
> 
> Base = declarative_base(metadata=metadata)
> 
> 
> class A(Base):
>     __tablename__ = 'a'
>     id = Column(Integer, primary_key=True)
> 
> 
> class B(Base):
>     __tablename__ = 'b'
>     id = Column(Integer, primary_key=True)
>     a_id = Column(ForeignKey('a.id'))
> 
> from sqlalchemy.schema import AddConstraint, DropConstraint
> 
> for table in Base.metadata.tables.values():
>     for constraint in table.constraints:
>         if isinstance(constraint, ForeignKeyConstraint):
>             event.listen(Base.metadata, "after_create",
>                          AddConstraint(constraint))
>             event.listen(Base.metadata, "before_drop",
>                          DropConstraint(constraint))
> 
> 
> e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)
> 
> 
> Base.metadata.create_all(e)
> 
> Base.metadata.drop_all(e)
> 
> 
> 
> 
> 
> 
> >
> > On Friday, December 11, 2015 at 3:38:16 PM UTC-8, Michael Bayer wrote:
> >
> >
> >
> > On 12/11/2015 05:25 PM, Gerald Thibault wrote:
> > > I am basing my question off the code at
> > > http://docs.sqlalchemy.org/en/latest/orm/relationship_persistence.html#rows-that-point-to-themselves-mutually-dependent-rows,
> >
> > > with a few changes.
> > >
> > > I am trying to handle a situation very similar to the one in that
> > > example, with 2 classes having the same relationship types as
> > those in
> > > the example. However, I would like to create the instances without
> > using
> > > the relationships, and instead populate the fk values directly. The
> > > example uses this, and it works.
> > >
> > > w1 = Widget(name='somewidget')
> > > e1 = Entry(name='someentry')
> > > w1.favorite_entry = e1
> > > w1.entries = [e1]
> > > session.add_all([w1, e1])
> > > session.commit()
> > >
> > > I would like to do this:
> > >
> > > w1 = Widget(widget_id=1, favorite_entry_id=1, name='somewidget')
> > > e1 = Entry(entry_id=1, widget_id=1, name='someentry')
> > > session.add_all([w1, e1])
> > > session.commit()
> > >
> > > The reason I am doing it this way is because I am operating from a
> > JSON
> > > fixture file, and trying to populate a database for unit testing. The
> > > method used in the example works perfectly, but trying to do it my
> > way
> > > yields:
> > >
> > > sqlalchemy.exc.IntegrityError: (IntegrityError) (1452, 'Cannot add or
> > > update a child row: a foreign key constraint fails (`test`.`widget`,
> > > CONSTRAINT `fk_favorite_entry` FOREIGN KEY (`favorite_entry_id`)
> > > REFERENCES `entry` (`entry_id`))') 'INSERT INTO widget (widget_id,
> > > favorite_entry_id, name) VALUES (%s, %s, %s)' (1, 1, 'somewidget')
> > >
> > > I understand the post_update option is on the relationship, and
> > not the
> > > column, so it has no effect on column population. Is there an
> > > alternative method to have that column populated separately via a
> > second
> > > statement, similar to the post_update functionality?
> >
> > Sure, you have to do it manually:
> >
> > w1 = Widget(widget_id=1, favorite_entry_id=None, name='somewidget')
> > e1 = Entry(entry_id=1, widget_id=1, name='someentry')
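The reply above is truncated in the archive; the manual approach it sketches -
insert with the circular FK column left NULL, then fill it in with a second
UPDATE - can be illustrated like this (minimal stand-in models; the wiring is
assumed, not the poster's actual code):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()


class Widget(Base):
    __tablename__ = "widget"
    widget_id = Column(Integer, primary_key=True)
    # use_alter breaks the table-level cycle for DDL generation.
    favorite_entry_id = Column(
        Integer,
        ForeignKey("entry.entry_id", use_alter=True,
                   name="fk_favorite_entry"))
    name = Column(String)


class Entry(Base):
    __tablename__ = "entry"
    entry_id = Column(Integer, primary_key=True)
    widget_id = Column(Integer, ForeignKey("widget.widget_id"))
    name = Column(String)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# Step 1: insert both rows, leaving the circular column NULL.
w1 = Widget(widget_id=1, favorite_entry_id=None, name="somewidget")
session.add(w1)
session.flush()
e1 = Entry(entry_id=1, widget_id=1, name="someentry")
session.add(e1)
session.flush()

# Step 2: fill in the circular reference with a second UPDATE.
w1.favorite_entry_id = 1
session.commit()
```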

RE: [sqlalchemy] Explicitly name the not null constraint for an Oracle primary key

2015-12-04 Thread Gombas, Gabor
Hi,

We solved this a long time ago using an extra function which gets called after 
SQLAlchemy has done its part. The function queries all constraints and renames 
the ones which look like NOT NULL constraints with a name starting with SYS_. 
This method works well in practice, and since we have full control over the 
schema, we don't have to worry about corner cases (e.g. differentiating between 
table-level CHECK vs. "real" NOT NULL constraints).

I have started working on integrating this into SQLAlchemy, and I have a patch 
which allows you to write

Column('foo', Integer, nullable=NotNull(name='foo_nn'))

And that ends up generating

   CREATE TABLE ... (foo INTEGER CONSTRAINT foo_nn NOT NULL ...)

It also supports using the naming convention machinery to automatically 
generate names for all NOT NULL constraints which do not have an explicit name. 
The change is a bit complex compared to targeting just a single dialect, but I 
really wanted the integration with MetaData.naming_convention, and that 
required some extra hoops. I plan to contribute the patch, but that may take 
some time.

Gabor
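The post-create rename approach can be sketched as follows (the query and
statement shapes are assumptions based on the description above, not the
author's actual code; only the pure SQL-building parts are shown, since they
need no Oracle connection):

```python
import re

# Oracle lists NOT NULL checks in *_CONSTRAINTS as type 'C' with a
# search_condition of the form: "COL" IS NOT NULL
FIND_SYS_CONSTRAINTS = """\
SELECT table_name, constraint_name, search_condition
FROM user_constraints
WHERE constraint_type = 'C'
  AND constraint_name LIKE 'SYS\\_%' ESCAPE '\\'
"""

_NOT_NULL_RE = re.compile(r'^"(\w+)" IS NOT NULL$')


def not_null_column(search_condition):
    """Return the column name if the condition is a plain NOT NULL
    check, else None (e.g. for table-level CHECK constraints)."""
    match = _NOT_NULL_RE.match((search_condition or "").strip())
    return match.group(1) if match else None


def rename_constraint_sql(table_name, old_name, new_name):
    # Oracle supports renaming a constraint in place.
    return "ALTER TABLE {} RENAME CONSTRAINT {} TO {}".format(
        table_name, old_name, new_name)
```

The driver loop would run FIND_SYS_CONSTRAINTS, map each NOT NULL column to
the desired name via the naming convention, and execute the generated ALTER
statements.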

> -Original Message-
> From: sqlalchemy@googlegroups.com
> [mailto:sqlalchemy@googlegroups.com] On Behalf Of Thijs Engels
> Sent: 04 December 2015 13:25
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] Explicitly name the not null constraint for an Oracle
> primary key
> 
> Dear all,
> 
> For maintenance purposes I am aiming to ensure that ALL database
> constraints have an explicit name (as recommended here as well:
> http://alembic.readthedocs.org/en/latest/naming.html)
> 
> With the NOT NULL constraints this can be done by adding an explicit
> CheckConstraint. However, at least for Oracle, the database ends up with one
> extra constraint which is added by the primary key. Is there a way to either
> avoid this automated creation (and add the NOT NULL constraint manually) or,
> if not, is it possible to explicitly provide a name for this constraint?
> 
> class Example(Base):
>     __tablename__ = "example"
>     __table_args__ = (
>         PrimaryKeyConstraint("entry_id", name="example_pk"),
>         CheckConstraint('"ENTRY_ID" IS NOT NULL',
>                         name="example_entry_nn"))
> 
>     id = Column("entry_id", Integer)
>     name = Column(String(64))
> 
> (not claiming the explicit not null constraint is the prettiest, but it does 
> give
> the result I am after)
> 
> When looking at this database SQL Developer indicates these constraints:
> 
> EXAMPLE_PK         Primary_Key
> EXAMPLE_ENTRY_NN   Check    "ENTRY_ID" IS NOT NULL
> SYS_C00109085      Check    "ENTRY_ID" IS NOT NULL
> 
> Question is how to get rid of the constraint with the generated name
> (SYS_C00109085)...
> 
> Thanks in advance,
> 
> Thijs
> 







RE: [sqlalchemy] "How can I know which fields cause IntegrityError when inserting into a table with multiple unique fields?"

2015-02-24 Thread Gombas, Gabor
There may be better ways if your backend supports it. E.g. if you’re using a 
recent version of PostgreSQL, then diagnostics information is attached to the 
exception being thrown by psycopg2 (which is available under the .orig property 
of the SQLAlchemy exception). See the documentation of 
psycopg2.extensions.Diagnostics and the documentation of libpq. I don’t know if 
there are enough details in the diagnostics to support your use case.

From: sqlalchemy@googlegroups.com [mailto:sqlalchemy@googlegroups.com] On 
Behalf Of Maurice Schleußinger
Sent: 24 February 2015 11:42
To: sqlalchemy@googlegroups.com
Subject: Re: [sqlalchemy] "How can I know which fields cause IntegrityError 
when inserting into a table with multiple unique fields?"

Merge won't work in my case, because I commit in bulk and also use multiple 
processes. By the time the commit is triggered (e.g. as soon as there are 1000 
new objects to commit), another process could already have committed an object, 
which triggers an IntegrityError. The same applies to the Unique Object recipe, 
if I'm not mistaken. So I basically have to check every object as I commit it, 
which defeats the point of a bulk commit. That's why I want to get the key 
which triggers the IntegrityError, update that object and commit all other 
elements again in bulk (and do this recursively if there are still other 
conflicts). To achieve this I have no other way but to parse the exception - 
or am I missing something?

Alternatively I thought about using raw SQL statements (INSERT IGNORE), but I 
would have to disable relationships for that to work and possibly create many 
new problems...

On Monday, February 23, 2015 at 9:11:28 PM UTC+1, Michael Bayer wrote:



On Feb 23, 2015, at 8:49 AM, Maurice Schleußinger wrote:
Thanks for the reply!

Since I create objects concurrently I can not predict, and therefore not 
pre-select, all rows which could possibly provoke IntegrityErrors.
On the other hand, Session.merge() seems to provide the functionality which 
could resolve my problem. In my setup many processes create a number of objects 
which could occur multiple times (in other words, throw an IntegrityError on 
commit). That can happen within one process (which I can handle otherwise), 
between multiple processes, and between a process and the consistent state in 
the database (which is my main problem ATM).

I just read the corresponding part in the SQLAlchemy docs. So if I use 
Session.merge() with the load=True argument (instead of Session.add()), the 
session should create the corresponding object if it does not exist, avoid 
duplicates within one session and also update an existing entry in the 
database?

Also it seems that merging only works for primary keys. So if I had the 
problem with a unique key, I still would have to parse the exception, right?

Ah ok, yes session.merge() works this way, but it does emit a SELECT.   If you 
were looking to emit less SQL I was referring to an operation like MySQL's 
REPLACE statement.

But if merge works for you then stick with that.   You can write your own 
function that does something similar for other fields that are not the PK.
The recipe below is one way to do this.

https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/UniqueObject





On Thursday, February 19, 2015 at 5:29:56 PM UTC+1, Michael Bayer wrote:

Maurice Schleußinger <m.schle...@gmail.com> wrote:

> Is there no other way?
>
> http://stackoverflow.com/questions/27635933/how-can-i-know-which-fiels-cause-integrityerror-when-inserting-into-a-table-with/27884632#27884632
>
> Parsing an exception with regex just doesn't feel right…


The two other ways are that you can pre-select the rows, or use a MERGE / 
upsert approach (you’d have to roll that yourself).





RE: [sqlalchemy] Oracle index not used on SQLAlchemy prepared statement

2014-05-13 Thread Gombas, Gabor
Hi,

Maybe you’ve run into this: 
https://groups.google.com/forum/#!topic/sqlalchemy/8Xn31vBfGKU

Gabor
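When chasing plan differences between SQLAlchemy and SQL*Plus, it also helps
to see exactly what statement text reaches the server versus what was pasted
into the editor; one generic debugging aid (table and column names below are
illustrative, not from the thread) is to render the statement both with bind
placeholders and with literals inlined:

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, select
from sqlalchemy.dialects import oracle

metadata = MetaData()
accounts = Table(          # illustrative table
    "accounts", metadata,
    Column("id", Integer, primary_key=True),
    Column("status", String(10)),
)

stmt = select(accounts).where(accounts.c.status == "OPEN")

# As actually sent to the server: bind placeholders.
bound_sql = str(stmt.compile(dialect=oracle.dialect()))

# With literals inlined, for pasting into SQL*Plus / EXPLAIN PLAN
# to compare execution plans against the bound form.
literal_sql = str(stmt.compile(dialect=oracle.dialect(),
                               compile_kwargs={"literal_binds": True}))
```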

From: sqlalchemy@googlegroups.com [mailto:sqlalchemy@googlegroups.com] On 
Behalf Of Thierry Florac
Sent: 12 May 2014 23:13
To: sqlalchemy
Subject: [sqlalchemy] Oracle index not used on SQLAlchemy prepared statement

Hi,

I use Python and SQLAlchemy to access an Oracle 11 database. As far as I know, 
SQLAlchemy always uses prepared statements.

On a huge table (4 million records), correctly indexed, SQLAlchemy's filtered 
queries don't use the index, so they do a full table scan and are very slow; 
using the same SQL code in a "raw" SQL editor (SQL*Plus) makes Oracle use the 
index with good performance.

We tried to add "+index" hints on requests, without effect on the Oracle 
execution path, which still doesn't want to use the index.

Any idea? Is it possible to really force Oracle to use an index, or to make 
SQLAlchemy not use prepared statements?

Best regards,
Thierry







RE: [sqlalchemy] SQLite temporary tables

2013-11-22 Thread Gombas, Gabor
That works, thanks. Currently, I'm using __declare_last__() to modify 
__table__.schema (SQLite) or __table__._prefixes (Oracle, PostgreSQL), once the 
dialect is known. For the latter two I'm also monkey-patching 
post_create_table(), which seemed easier to do than to completely re-implement 
table creation, even if it does not look that nice.

> -Original Message-
> From: sqlalchemy@googlegroups.com
> [mailto:sqlalchemy@googlegroups.com] On Behalf Of Michael Bayer
> Sent: 22 November 2013 16:17
> To: sqlalchemy@googlegroups.com
> Subject: Re: [sqlalchemy] SQLite temporary tables
> 
> 
> On Nov 22, 2013, at 6:24 AM, Gombas, Gabor wrote:
> 
> > Hi,
> >
> > How should one create SQLite temporary tables using SQLAlchemy? The
> following does not work:
> >
> > from sqlalchemy import create_engine, Column, String
> > from sqlalchemy.ext.declarative import declarative_base
> > from sqlalchemy.orm import sessionmaker
> > from sqlalchemy.schema import CreateTable
> >
> > Base = declarative_base()
> >
> > class TempTest(Base):
> >     __tablename__ = 'temp_test'
> >     __table_args__ = {'schema': 'temp'}
> >     name = Column(String, primary_key=True)
> >
> > engine = create_engine('sqlite:tmp/testdb', echo=True)
> > Session = sessionmaker(bind=engine)
> > session = Session()
> >
> > TempTest.__table__.create(
> >     session.bind.execution_options(autocommit=False))
> > session.add(TempTest(name='foo'))
> > session.flush() # Fails
> >
> > Although I see the "CREATE TABLE" being emitted, INSERT fails saying the
> table does not exist. If instead of calling TempTest.__table__.create() I use
> the underlying DBAPI connection, then I can make it work, but it's ugly:
> 
> you use create() but use the session's current transaction-bound connection:
> 
> TempTest.__table__.create(session.connection())
> 
> I'm assuming you're using @compiles somewhere to get the "TEMPORARY"
> syntax in there.
> 
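A runnable sketch of the working approach (shown with a Core Table for
brevity, using Table's prefixes argument to get TEMPORARY into the DDL rather
than @compiles; an assumption about the setup, not the poster's exact code):

```python
from sqlalchemy import (Column, MetaData, String, Table, create_engine,
                        func, select)
from sqlalchemy.orm import sessionmaker

metadata = MetaData()

temp_test = Table(
    "temp_test", metadata,
    Column("name", String, primary_key=True),
    prefixes=["TEMPORARY"],   # renders CREATE TEMPORARY TABLE
)

engine = create_engine("sqlite://")
session = sessionmaker(bind=engine)()

# Create on the session's own connection, so the temporary table lives
# on the same DBAPI connection the session's transaction is using.
temp_test.create(session.connection())

session.execute(temp_test.insert().values(name="foo"))
session.flush()
count = session.execute(
    select(func.count()).select_from(temp_test)).scalar()
```

Creating on `session.bind` instead hands the DDL to a different pooled
connection, which is why the original INSERT could not see the table.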








[sqlalchemy] SQLite temporary tables

2013-11-22 Thread Gombas, Gabor
Hi,

How should one create SQLite temporary tables using SQLAlchemy? The following 
does not work:

from sqlalchemy import create_engine, Column, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from sqlalchemy.schema import CreateTable

Base = declarative_base()

class TempTest(Base):
    __tablename__ = 'temp_test'
    __table_args__ = {'schema': 'temp'}
    name = Column(String, primary_key=True)

engine = create_engine('sqlite:tmp/testdb', echo=True)
Session = sessionmaker(bind=engine)
session = Session()


TempTest.__table__.create(session.bind.execution_options(autocommit=False))
session.add(TempTest(name='foo'))
session.flush() # Fails

Although I see the "CREATE TABLE" being emitted, INSERT fails saying the table 
does not exist. If instead of calling TempTest.__table__.create() I use the 
underlying DBAPI connection, then I can make it work, but it's ugly:

dbapi_con = session.connection().connection.connection
dbapi_con.execute(str(CreateTable(TempTest.__table__,
                                  bind=session.bind)))
session.add(TempTest(name='foo'))
session.flush() # Succeeds

Is there a nicer way to do this?

Thanks,
Gabor


Gabor Gombas   
Morgan Stanley | Enterprise Infrastructure   
Lechner Odon fasor 8 | Floor 07   
Budapest, 1095   
Phone: +36 1 881-4376   
gabor.gom...@morganstanley.com   
   
   
Be carbon conscious. Please consider our environment before printing this 
email.
   







Re: [sqlalchemy] testing for an association proxy (possible bug and patch included)

2013-08-23 Thread Gombas, Gabor (IT)
On Fri, Aug 23, 2013 at 12:11:39PM -0700, Jonathan Vanasco wrote:

> i think a simple fix could be something like this (line 240,
> sqlalchemy/ext/associationproxy.py):
> 
>  if self.scalar:
> -    if not getattr(obj, self.target_collection)
> -    return self._scalar_get(getattr(obj, self.target_collection))
>  else:
> 
>  if self.scalar:
> +    proxied = getattr(obj, self.target_collection)
> +    if not proxied:
> +        return None
> +    return self._scalar_get(proxied)
>  else:

We're monkey-patching AssociationProxy.__get__ with the same change
since SQLA 0.5.x, so it would be nice to get it applied upstream...
Maybe in 0.9?

Gabor



RE: [sqlalchemy] The cost of defer()

2013-07-19 Thread Gombas, Gabor
Thanks, I was just able to test the patch. With the patch, my query with the 
extra defer() options added is just a bit faster than without those options. So 
while using defer() is not a huge win in my case, it at least no longer causes 
an unexpected slowdown.

From: sqlalchemy@googlegroups.com [mailto:sqlalchemy@googlegroups.com] On 
Behalf Of Michael Bayer
Sent: 13 July 2013 22:46
To: sqlalchemy@googlegroups.com
Subject: Re: [sqlalchemy] The cost of defer()

anyway, with some profiling of loading 1000 rows with six deferred cols, the 
function count with the defer was over 50K, and without the defer around 37K; 
with the patch now committed, the deferred case is at 32K, a bit less than 
loading the data outright, as it should be (an even bigger drop would be 
better, sure, but this is a decent improvement).  The change to get that much 
wasn't that big a deal, so this is in 0.8.


On Jul 11, 2013, at 2:04 PM, Michael Bayer <mike...@zzzcomputing.com> wrote:


please try out this patch:

http://www.sqlalchemy.org/trac/attachment/ticket/2778/2778.patch


which refactors this particular system so it doesn't require producing a new 
object per instance, which is the slowest part of this, and also inlines the 
work of assembling the callable.  This should give you a 50% or greater 
improvement in method calls.  If this is enough, this might be OK for 0.8.





On Jul 11, 2013, at 1:40 PM, Michael Bayer <mike...@zzzcomputing.com> wrote:


Well, what kind of data are we talking about?  defer()'s use case was for 
binary large objects and such, fields that are many KB/MB in size.  If you're 
deferring a bunch of ints, then yes, it's not optimized very well for that.

Half of the overhead could be easily fixed here: creating those 
LoadDeferredColumns objects could be offloaded to a later point.  The other 
half, setting up that callable, I'd have to spend some time reviewing the use 
cases for.  The difference between an attribute that is "deferred" and one 
that is "expired" is that if you access some other expired attribute, the 
"deferred" attribute will still not load - because the use case is, you really 
don't want this BLOB column to load unless you touch it specifically.   So 
getting that instruction into the state - "don't load these keys even on an 
unexpire" - requires some kind of method call on every state.
InstanceState._set_callable could be inlined more here to do less work, 
perhaps reduced to just populating a key in a dictionary up in the loader 
process, though these reorganizations can destabilize the code.   It's not 
something I'd be comfortable doing in 0.8; the ticket I created 
(http://www.sqlalchemy.org/trac/ticket/2778) holds any potential work here for 
0.9.

The other way to go here is to provide a query option that explicitly delivers 
the attribute as "expired" as opposed to "deferred", looking at how that works 
right now I can give you the recipe below, but it still involves a function 
call per column so that the InstanceState knows the attribute is "expired".


from sqlalchemy.orm.strategies import DeferredOption, DeferredColumnLoader


class DontLoadColumnOption(DeferredOption):
    def get_strategy_class(self):
        return NoColumnLoader


class NoColumnLoader(DeferredColumnLoader):
    def create_row_processor(self, context, path, mapper, row, adapter):
        if not self.is_class_level:
            def set_deferred_for_local_state(state, dict_, row):
                state.callables[self.key] = state
            return set_deferred_for_local_state, None, None
        else:
            return super(NoColumnLoader, self).create_row_processor(
                context, path, mapper, row, adapter)


if __name__ == '__main__':
    from sqlalchemy import *
    from sqlalchemy.orm import *
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class A(Base):
        __tablename__ = 'a'

        id = Column(Integer, primary_key=True)

        x = Column(Integer)
        y = Column(Integer)
        z = Column(Integer)
        q = Column(Integer)

    e = create_engine("sqlite://", echo=True)
    Base.metadata.create_all(e)
    s = Session(e)
    s.add_all([
        A(x="x%d" % i, y="y%d" % i, z="z%d" % i, q="q%d" % i)
        for i in xrange(1000)
    ])
    s.commit()
    s.close()

    loaded = s.query(A).options(DontLoadColumnOption("y"),
                                DontLoadColumnOption("z")).order_by(A.id).all()

    for a in loaded:
        assert 'y' not in a.__dict__
        assert 'z' not in a.__dict__
        assert 'x' in a.__dict__
        assert 'q' in a.__dict__

        assert a.z == "z%d" % (a.id - 1), a.z

RE: [sqlalchemy] The cost of defer()

2013-07-11 Thread Gombas, Gabor
I did need the objects, not just the raw data, otherwise I'd have had to 
duplicate a bunch of existing code which expects full-blown objects to operate 
on. Modifying the mapper is not really an option unless the majority of the 
users have the same requirements; otherwise I'd end up having to add a huge 
number of undefer() calls everywhere else (and the query storm caused by 
missing an undefer() would be much more painful). Maybe the documentation of 
defer() could mention that using the option can in fact reduce performance, 
because it's not intuitive that loading unneeded data can be cheaper than not 
loading it.

From: sqlalchemy@googlegroups.com [mailto:sqlalchemy@googlegroups.com] On 
Behalf Of Michael Bayer
Sent: 11 July 2013 15:34
To: sqlalchemy@googlegroups.com
Subject: Re: [sqlalchemy] The cost of defer()

the path would be to figure out whether the logic of a per-query defer option 
can somehow be linked to the attribute when it hits its normal refresh logic - 
if those attributes were set up as "deferred" at the mapper config level 
(where deferred is usually used), you wouldn't see this overhead, since the 
deferred loader would be the default callable.    But if you only need a few 
attributes, why not just go the other way and query for those attributes 
directly?   That would save you way more overhead than even if we removed all 
overhead from defer(), since the most expensive thing is all the mapper 
identity map logic that takes place when full entities are loaded.


On Jul 11, 2013, at 4:02 AM, "Gombas, Gabor" <gabor.gom...@morganstanley.com> wrote:



Hi,

I wrote a query joining a couple of tables, returning over a hundred thousand 
rows. Since I only needed to access a couple of the attributes of the returned 
objects for this specific use case, I thought to use a dozen or so 
Query.options(defer(...)) calls to avoid loading the unneeded columns. But to 
my surprise, the query became much slower. Profiling attributed almost all the 
extra time to set_deferred_for_local_state() and LoadDeferredColumns.__init__():

   827855   15.668    0.000   27.068    0.000 .../SQLAlchemy-0.8.1-py2.6-linux-x86_64.egg/sqlalchemy/orm/strategies.py:184(set_deferred_for_local_state)
   827855   10.524    0.000   10.524    0.000 .../SQLAlchemy-0.8.1-py2.6-linux-x86_64.egg/sqlalchemy/orm/strategies.py:259(__init__)

If defer() is so expensive, then it is not very useful. Would it be possible to 
make it cheaper?

Gabor




NOTICE: Morgan Stanley is not acting as a municipal advisor and the opinions or 
views contained herein are not intended to be, and do not constitute, advice 
within the meaning of Section 975 of the Dodd-Frank Wall Street Reform and 
Consumer Protection Act. If you have received this communication in error, 
please destroy all electronic and paper copies and notify the sender 
immediately. Mistransmission is not intended to waive confidentiality or 
privilege. Morgan Stanley reserves the right, to the extent permitted under 
applicable law, to monitor electronic communications. This message is subject 
to terms available at the following link: 
http://www.morganstanley.com/disclaimers If you cannot access these links, 
please notify us by reply message and we will send the contents to you. By 
messaging with Morgan Stanley you consent to the foregoing.





[sqlalchemy] The cost of defer()

2013-07-11 Thread Gombas, Gabor
Hi,

I wrote a query joining a couple of tables, returning over a hundred thousand 
rows. Since I only needed to access a couple of the attributes of the returned 
objects for this specific use case, I thought to use a dozen or so 
Query.options(defer(...)) calls to avoid loading the unneeded columns. But to 
my surprise, the query became much slower. Profiling attributed almost all the 
extra time to set_deferred_for_local_state() and LoadDeferredColumns.__init__():

   827855   15.668    0.000   27.068    0.000 .../SQLAlchemy-0.8.1-py2.6-linux-x86_64.egg/sqlalchemy/orm/strategies.py:184(set_deferred_for_local_state)
   827855   10.524    0.000   10.524    0.000 .../SQLAlchemy-0.8.1-py2.6-linux-x86_64.egg/sqlalchemy/orm/strategies.py:259(__init__)

If defer() is so expensive, then it is not very useful. Would it be possible to 
make it cheaper?

Gabor









[sqlalchemy] Single table inheritance and __table_args__

2013-04-09 Thread Gombas, Gabor (IT)
Hi,

What is the recommended method of specifying constraints on columns
added by a subclass using single-table inheritance? This does not work,
I get a KeyError for "col_a":

class BaseClass(Base):
    __tablename__ = "base"
    __table_args__ = (Index("ix_base", "base_col"),
                      UniqueConstraint("col_a", "col_b"))

    base_col = Column(Integer)

class SubClass(BaseClass):
    col_a = Column(Integer)
    col_b = Column(Integer)

SQLA does not allow __table_args__ placed on SubClass. I can do a
workaround by using BaseClass.__table__.append_constraint() after
defining SubClass, but mixing different ways of defining constraints
does not look nice.
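For completeness, the append_constraint() workaround mentioned above looks roughly like this (a sketch, assuming SQLAlchemy 1.4+; the discriminator column and the polymorphic setup are added so the single-table mapping is complete):

```python
from sqlalchemy import Column, Integer, String, UniqueConstraint, create_engine
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class BaseClass(Base):
    __tablename__ = "base"
    id = Column(Integer, primary_key=True)
    kind = Column(String(20))
    base_col = Column(Integer)
    __mapper_args__ = {"polymorphic_on": kind, "polymorphic_identity": "base"}

class SubClass(BaseClass):
    __mapper_args__ = {"polymorphic_identity": "sub"}
    col_a = Column(Integer)
    col_b = Column(Integer)

# Only after SubClass has contributed its columns to the shared table
# can the constraint be attached:
BaseClass.__table__.append_constraint(UniqueConstraint("col_a", "col_b"))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)  # the UNIQUE constraint appears in the DDL
```

The string column names passed to UniqueConstraint are resolved against the table when the constraint is appended, which is why this has to happen after SubClass is defined.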

Gabor





RE: [sqlalchemy] SQLite rollback() fails

2012-10-03 Thread Gombas, Gabor
I've tested the 0.7 & 0.8 snapshots, and both work fine now. Thanks!

From: sqlalchemy@googlegroups.com [mailto:sqlalchemy@googlegroups.com] On 
Behalf Of Michael Bayer
Sent: 03 October 2012 17:20
To: sqlalchemy@googlegroups.com
Subject: Re: [sqlalchemy] SQLite rollback() fails


On Oct 3, 2012, at 10:34 AM, Michael Bayer wrote:




Catching both of these is easy enough and a patch is added to #2583: 
http://www.sqlalchemy.org/trac/ticket/2583

The SQLite case you describe isn't exactly like either of these; if the object 
were flushed by the session and received the same PK, that case is already 
handled, since the object would be in ._new.  So my only guess is that that's 
not the trigger of the issue.    So at the very least, you need to try that 
patch and see if it clears things up.

Assuming that's the fix, I don't know that there's a clean way to patch this 
in, except for patching your install or monkeypatching a new Session class in. 
I should have it committed in both 0.7/0.8 within 30, unless I hit some 
snags in the tests.


OK those are in tip for default + 0.7, so you can get those fixes now from the 
snapshot releases.





[sqlalchemy] SQLite rollback() fails

2012-10-03 Thread Gombas, Gabor
Hi,

I've run into a rather interesting issue, where calling session.rollback() 
dies. The error I get with SQLA 0.7 is:

InvalidRequestError: Can't attach instance <Foo at 0x...>; another instance 
with key (<class 'Foo'>, (342,)) is already present in this session.

Traceback:

   sqlalchemy-rel_0_7/lib/sqlalchemy/orm/session.py:672:rollback
   sqlalchemy-rel_0_7/lib/sqlalchemy/orm/session.py:386:rollback
   sqlalchemy-rel_0_7/lib/sqlalchemy/orm/session.py:417:_rollback_impl
   sqlalchemy-rel_0_7/lib/sqlalchemy/orm/session.py:274:_restore_snapshot
   sqlalchemy-rel_0_7/lib/sqlalchemy/orm/session.py:1620:_update_impl
   sqlalchemy-rel_0_7/lib/sqlalchemy/orm/session.py:1648:_attach

0.8 gives a slightly different message, but I think it is essentially the same:

InvalidRequestError: A conflicting state is already present in the identity 
map for key (<class 'Foo'>, (342,))

Traceback:

sqlalchemy-default/lib/sqlalchemy/orm/session.py:637:rollback
sqlalchemy-default/lib/sqlalchemy/orm/session.py:346:rollback
sqlalchemy-default/lib/sqlalchemy/orm/session.py:377:_rollback_impl
sqlalchemy-default/lib/sqlalchemy/orm/session.py:233:_restore_snapshot
sqlalchemy-default/lib/sqlalchemy/orm/session.py:1578:_update_impl
sqlalchemy-default/lib/sqlalchemy/orm/identity.py:115:add

What I see when turning on query logging:


-  The Foo object with primary key 342 is loaded (using a non-primary 
mapper, if that makes any difference), and then deleted. The session is flushed.

-  A new object is created (this time using the primary mapper), and 
during INSERT, SQLite assigns the same primary key to it as the previously 
deleted object had (I can see that from subsequent SELECT statements 
referencing the new object; it would be nice if SQLA logged the primary key 
after an INSERT at debug level)

-  An application error (unrelated to SQLAlchemy) occurs, which causes 
rollback() to be called
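The key reuse itself is ordinary SQLite behaviour: an INTEGER PRIMARY KEY without AUTOINCREMENT is allocated as max(rowid)+1, so deleting the row with the highest key frees that key for the next INSERT. A stdlib-only sketch (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO foo (name) VALUES ('old')")
old_pk = conn.execute("SELECT id FROM foo").fetchone()[0]

# Delete the row, then insert a new one: without AUTOINCREMENT, SQLite
# allocates max(rowid)+1, so the freed key is handed out again.
conn.execute("DELETE FROM foo WHERE id = ?", (old_pk,))
conn.execute("INSERT INTO foo (name) VALUES ('new')")
new_pk = conn.execute("SELECT id FROM foo").fetchone()[0]
assert new_pk == old_pk
```

Declaring the column as INTEGER PRIMARY KEY AUTOINCREMENT instead makes SQLite track the high-water mark in sqlite_sequence, so deleted keys are never reissued.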

I have a reliable reproducer, but it is the unit test of a large application, 
so there are lots of possibly unrelated things going on. I've spent two days 
coming up with a standalone reproducer, but I've failed, that's why I'm sending 
this mail instead of opening a ticket.

Any ideas?

Gabor

