[sqlalchemy] Re: Bulk creation of columns

2011-09-21 Thread pravin battula
> You need to create new Column objects for each table because the table 
> objects take ownership of the passed in columns.
I think that is what I'm doing: creating new Column objects for the two
columns, first appending them to the table object, and then trying to
create those columns using create_column. Please correct me if I'm
wrong.




On 21 Sep, 20:24, AM  wrote:
> On Sep 21, 2011, at 6:30 AM, pravin battula wrote:
>  Hi,
>
> > How can i create columns in bulk?
> > I tried as below but doesn't work.
>
> >    migrate_engine = create_engine('mysql://root:root@localhost/
> > payroll', echo=False)
> >    metadata = MetaData(bind = migrate_engine)
> >    metadata.reflect(bind = migrate_engine, schema = 'payroll')
>
> >    tableObj = metadata.tables.get('test.salary')
>
> >    colList =
> > [Column('description',String(100)),Column('information',String(50))]
>
> >    tableObj.append_column(colList)
> >    tableObj.create_column(colList)
>
> You need to create new Column objects for each table because the table 
> objects take ownership of the passed in columns.
>
> Cheers,
> M
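
A minimal sketch of the point above, assuming stock SQLAlchemy and an illustrative in-memory SQLite table (the table and column names here are stand-ins, and sqlalchemy-migrate's create_column/ALTER emission is not shown): Table.append_column() accepts a single Column per call, so the list must be iterated.

```python
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String

engine = create_engine('sqlite://')
metadata = MetaData()

# Illustrative table standing in for the reflected 'salary' table
salary = Table('salary', metadata, Column('id', Integer, primary_key=True))

new_cols = [Column('description', String(100)),
            Column('information', String(50))]
for col in new_cols:
    salary.append_column(col)  # one Column object per call, never the list

metadata.create_all(engine)
print([c.name for c in salary.columns])  # ['id', 'description', 'information']
```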

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] ORM or Core

2011-09-21 Thread Michael Bayer

On Sep 21, 2011, at 1:46 AM, RedBaron wrote:

> Hi,
> I am a newbie to SQLAlchemy. While reading the docs on the sqlalchemy
> site, I was a bit confused about ORM and Core.
> From what I understand, we can define table and class mapping either
> through the ORM (defining a class that extends Base and so on), or we can
> directly use Core components (define Tables using SQL expressions and
> then define explicit mappings to Python classes).

Class mapping is only in the ORM. The table definition is in fact always 
based on a Core construct; the ORM just maps your class onto that table 
definition, where Declarative provides a more succinct syntax.


> In my project, I need to define mappings for tables with composite
> primary/foreign keys. I was wondering which of the two approaches will
> be better (easier?).

It sort of depends on where you're coming from. If you're a DB purist, you'd 
probably find it much easier to work with the Core, for starters. The ORM has 
more of a conceptual leap involved, so it depends on how willing you are to do 
things in a certain way (the ORM has more of an opinion, basically).

> Given that size of DB is huge (a few mega bytes
> per day), which approach has a performance incentive?

This question comes up a lot, but the funny thing is that the size of the 
database is not much of a factor; it's how many rows of data you need to be 
moving along that impacts performance. Both the ORM and the Core are going to 
issue the same kinds of SELECT constructs and such. The Core is going to be 
able to pull rows down extremely quickly compared to the ORM, since it doesn't 
need to do much else with them, whereas the ORM needs to constantly be 
concerned with how rows coming in and going out are organized with respect to 
the Python objects they represent. If you measure this overhead in a direct 
matchup to the Core, it will seem very significant, because it is. But 
even with all of that, an ORM app can potentially perform extremely well, 
because you're building on an approach that can be more savvy about when to 
select data and when not to, and the ORM has some patterns to work with second-
level caches as well. All of this can be built with the Core too, but the 
extra work there is that you're working with more rudimentary components. The 
ORM is really mostly about reducing the amount of ad-hoc code it takes to 
accomplish something. A more complex schema is probably easier to work with 
when the ORM is in use, as a lot of the complexity will be handled for you.


> Also which of
> the two approaches would be easier to integrate with web development
> frameworks (Grok, Pyramid etc)?

The ORM is usually easier, because that's what most people use, and it presents 
an existing structure that corresponds nicely to a web request: the Session. 
Of course, even with the Session you can send any Core selectable unit to it 
right through execute(), so the line between ORM and Core is very fluid. So 
the ORM is usually where people start, since it doesn't prevent you from doing 
things in a "Core" style as needed.
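
A small sketch of that fluid line, using an illustrative model and an in-memory SQLite database (the model and names are assumptions, not from the thread): the same Session serves ORM queries and plain Core execute() calls.

```python
from sqlalchemy import create_engine, Column, Integer, String, select
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

session = Session(engine)
session.add(User(name='alice'))
session.commit()

# ORM style: rows come back as mapped User instances
users = session.query(User).all()

# Core style: send a plain select() right through execute()
rows = session.execute(select(User.__table__.c.name)).all()
```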




Re: [sqlalchemy] Re: object_session returns None, but state still has session_id

2011-09-21 Thread Michael Bayer

On Sep 21, 2011, at 11:03 AM, Ben Ford wrote:

> I've now tried a make_transient so my code looks like this:
> 
> obj = 
> ses = object_session(obj)
> if not ses:
>make_transient(obj)
>ses = Session()
>ses.add(obj)
> ses.commit()
> 
> I still get the same exception. 
> 
> I guess my other option now is to do:
> 
> state = attributes.instance_state(obj)
> state.detach()
> 
> Would that work do you think? BTW would this be considered a bug, should I 
> raise a bug report on this?

I think you're hitting an edge case here that's the result of trying to use 
objects across sessions where you're just allowing sessions to fall out of 
scope. I'm not sure the behavior as-is is how it should be. The source to 
_attach() implies that "implicit detachment" was how this worked at some point, 
or was intended to work, but there are no tests; I was under the impression 
that the object is in fact implicitly detached, but that doesn't occur here 
without a small change. http://www.sqlalchemy.org/trac/ticket/2281 sums up 
the behavior.

The questions I'd start with (keeping in mind this is towards establishing a 
use case that would drive the desired behavior) are: what is preventing you 
from keeping a Session in place, calling close() on it when completed, or 
using new objects for a new Session instead of recycling those hanging around 
from some previous operation? That said, I'm leaning towards that patch, but 
probably only in 0.7, as it is a slight behavioral change.
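
A sketch of the safer pattern suggested here, with an illustrative model (names assumed): close the old Session explicitly, then make_transient() before re-adding the object to a new one. expire_on_commit=False keeps the attribute values loaded across the close.

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session, make_transient

Base = declarative_base()

class Item(Base):
    __tablename__ = 'items'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

s1 = Session(engine, expire_on_commit=False)
obj = Item(name='widget')
s1.add(obj)
s1.commit()
s1.close()  # explicitly close instead of letting the Session fall out of scope

make_transient(obj)  # clears the old session/identity bookkeeping
obj.id = None        # let the new INSERT generate a fresh primary key

s2 = Session(engine)
s2.add(obj)          # no "already attached" error now
s2.commit()
```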







Re: [sqlalchemy] safe delete recipe

2011-09-21 Thread Andrei Chirila
Hello Michael,

I changed delete as you suggested.

def delete(self, synchronize_session='evaluate'):
    columns = self.column_descriptions
    # don't know how to delete on multiple tables in
    # the same statement
    assert len(columns) == 1
    entity = columns[0]
    if 'type' in entity and \
            hasattr(entity['type'], 'deleted_at'):
        self.update({'deleted_at': int(time.time())},
                    synchronize_session=synchronize_session)
        return
    super(DeletedAtQuery, self).\
        delete(synchronize_session=synchronize_session)

I hope this covers all the cases.

Thank you very much,
Andrei Chirila


On Wed, Sep 21, 2011 at 4:52 PM, Michael Bayer wrote:

>
> On Sep 21, 2011, at 9:53 AM, Andrei Chirila wrote:
>
> Hello,
>
> I looked closely at Query implementation and I ended up with this:
>
>
> def delete(self, synchronize_session='evaluate'):
> context = self._compile_context()
> assert len(context.statement.froms) == 1
> if 'deleted_at' in context.statement.froms[0].columns:
> self.update({'deleted_at': int(time.time())},
> synchronize_session=synchronize_session)
> return
> super(DeletedAtQuery, self).\
> delete(synchronize_session=synchronize_session)
>
> I'm using _compile_context private method, and I would like to know if it's
> anything I could improve in order to get to a complete implementation of
> safe delete.
>
>
> OK, so if it were me, I'd look instead at the entities which the Query is
> against and look for a deleted_at attribute. The most public-API way to
> get at the entities being queried is via the query.column_descriptions
> accessor:
> http://www.sqlalchemy.org/docs/orm/query.html#sqlalchemy.orm.query.Query.column_descriptions
> From that you'd pretty much have a list of length 1, since you're doing
> query(entity).delete(); you'd take a look at hasattr(entities[0]['type'],
> 'deleted_at'). That would be cleaner than the _compile_context approach.
>
>
>
>
>
> Thank you,
> Andrei Chirila
>
>
> On Tue, Sep 20, 2011 at 4:18 PM, Michael Bayer 
> wrote:
>
>>
>> On Sep 20, 2011, at 10:12 AM, Andrei Chirila wrote:
>>
>> Hello Michael,
>>
>> Thank you :)
>>
>> I changed the code according to your indications:
>>
>> def paranoid_delete(session, flush_context, instances):
>> """ Intercepts item deletions (single item deletions)
>> and, if there is a field deleted_at for that
>> object, it's updating deleted at instead of really
>> deleting the object """
>> # print "current session", session
>> # print "context", flush_context
>> # print "dirty", session.dirty
>> # print "deleted", session._deleted
>> for obj in session.deleted:
>> # print ">> deleted object:", instance, " -> ", obj
>> if hasattr(obj, 'deleted_at'):
>> print "moving object back: ", obj
>> obj.deleted_at = int(time.time())
>> session.add(obj)
>>
>> and it seems to do what it was intended.
>>
>> Do you know where could I add a hook for before_bulk_delete?
>>
>>
>> Well since you're subclassing Query I'd add an overriding delete() method
>> to your LimitingQuery, do your hook in there, then call the delete() method
>> of Query itself.
>>
>>
>>
>>
>> Thank you,
>> Andrei Chirila
>>
>>
>> On Tue, Sep 20, 2011 at 3:59 PM, Michael Bayer 
>> wrote:
>>
>>>
>>> On Sep 20, 2011, at 8:33 AM, Andrei Chirila wrote:
>>>
>>> > Hello,
>>> >
>>> > I'm working at implementing a safe delete on my models (having a
>>> > deleted_at field which is checked and setting a deleted_at field when
>>> > a model is deleted).
>>> >
>>> > I looked over the LimitingQuery recipe and at
>>> > SessionEvents.before_flush, and below is my solution. As I'm kind of a
>>> > new SQLAlchemy user, I would like to ask if anyone can see any obvious
>>> > mistake in the following code (I'm sure I missed bulk deletion and
>>> > session expunge, but I don't really know how to integrate them).
>>>
>>> I'd stay away from identity_map.add(object), as that skips a lot of
>>> bookkeeping that occurs within the Session.   and definitely *never* modify
>>> anything with an _underscore on Session, your code becomes totally non
>>> portable to even small changes in SQLAlchemy (which occur frequently as we
>>> have new releases, bugfixes you'll want, etc).The public (read-only)
>>> accessor for the "deleted" collection is Session.deleted.
>>>
>>> If you have an object marked for deletion, you can re-add it,
>>> session.add(object), will remove it from the deleted list.
>>>
>>> If there's something you need the Session to do that there is not a
>>> public API for, ask me on the list here first, as I'd rather ensure there
>>> are maintainable paths for use cases.
>>>
>>>
>>>
>>>
>>> >
>>> > Any feedback would be appreciated.
>>> >
>>> > Thank you,
>>> > Andrei Chirila
>>> >
>>> > from sqlalc

Re: [sqlalchemy] Bulk creation of columns

2011-09-21 Thread AM

On Sep 21, 2011, at 6:30 AM, pravin battula wrote:

> Hi,
> 
> How can i create columns in bulk?
> I tried as below but doesn't work.
> 
>migrate_engine = create_engine('mysql://root:root@localhost/
> payroll', echo=False)
>metadata = MetaData(bind = migrate_engine)
>metadata.reflect(bind = migrate_engine, schema = 'payroll')
> 
>tableObj = metadata.tables.get('test.salary')
> 
>colList =
> [Column('description',String(100)),Column('information',String(50))]
> 
>tableObj.append_column(colList)
>tableObj.create_column(colList)

You need to create new Column objects for each table because the table objects 
take ownership of the passed in columns.

Cheers,
M




[sqlalchemy] Re: object_session returns None, but state still has session_id

2011-09-21 Thread Ben Ford
I've now tried make_transient, so my code looks like this:

obj = 
ses = object_session(obj)
if not ses:
    make_transient(obj)
    ses = Session()
    ses.add(obj)
ses.commit()

I still get the same exception.

I guess my other option now is to do:

state = attributes.instance_state(obj)
state.detach()

Would that work, do you think? BTW, would this be considered a bug? Should I 
file a bug report on this?

Cheers,
Ben




Re: [sqlalchemy] safe delete recipe

2011-09-21 Thread Michael Bayer

On Sep 21, 2011, at 9:53 AM, Andrei Chirila wrote:

> Hello,
> 
> I looked closely at Query implementation and I ended up with this:
> 
> 
> def delete(self, synchronize_session='evaluate'):
> context = self._compile_context()
> assert len(context.statement.froms) == 1
> if 'deleted_at' in context.statement.froms[0].columns:
> self.update({'deleted_at': int(time.time())},
> synchronize_session=synchronize_session)
> return
> super(DeletedAtQuery, self).\
> delete(synchronize_session=synchronize_session)
> 
> I'm using _compile_context private method, and I would like to know if it's 
> anything I could improve in order to get to a complete implementation of safe 
> delete.

OK, so if it were me, I'd look instead at the entities which the Query is 
against and look for a deleted_at attribute. The most public-API way to get 
at the entities being queried is via the query.column_descriptions accessor:
http://www.sqlalchemy.org/docs/orm/query.html#sqlalchemy.orm.query.Query.column_descriptions
From that you'd pretty much have a list of length 1, since you're doing 
query(entity).delete(); you'd take a look at hasattr(entities[0]['type'], 
'deleted_at'). That would be cleaner than the _compile_context approach.
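
A runnable sketch of that column_descriptions approach, using an assumed model and an in-memory SQLite database (this illustrates the suggestion; the model and names are not from the thread):

```python
import time

from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.orm import declarative_base, sessionmaker
from sqlalchemy.orm.query import Query

class DeletedAtQuery(Query):
    def delete(self, synchronize_session='evaluate'):
        entities = self.column_descriptions
        assert len(entities) == 1  # single-entity deletes only
        if hasattr(entities[0]['type'], 'deleted_at'):
            # soft delete: stamp deleted_at instead of emitting DELETE
            self.update({'deleted_at': int(time.time())},
                        synchronize_session=synchronize_session)
            return
        super(DeletedAtQuery, self).delete(
            synchronize_session=synchronize_session)

Base = declarative_base()

class Account(Base):
    __tablename__ = 'accounts'
    id = Column(Integer, primary_key=True)
    deleted_at = Column(Integer, nullable=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
SessionCls = sessionmaker(bind=engine, query_cls=DeletedAtQuery)

session = SessionCls()
session.add(Account())
session.commit()
session.query(Account).delete()  # becomes UPDATE ... SET deleted_at
session.commit()
```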




> 
> Thank you,
> Andrei Chirila
> 
> 
> On Tue, Sep 20, 2011 at 4:18 PM, Michael Bayer  
> wrote:
> 
> On Sep 20, 2011, at 10:12 AM, Andrei Chirila wrote:
> 
>> Hello Michael,
>> 
>> Thank you :)
>> 
>> I changed the code according to your indications:
>> 
>> def paranoid_delete(session, flush_context, instances):
>> """ Intercepts item deletions (single item deletions)
>> and, if there is a field deleted_at for that
>> object, it's updating deleted at instead of really
>> deleting the object """
>> # print "current session", session
>> # print "context", flush_context
>> # print "dirty", session.dirty
>> # print "deleted", session._deleted
>> for obj in session.deleted:
>> # print ">> deleted object:", instance, " -> ", obj
>> if hasattr(obj, 'deleted_at'):
>> print "moving object back: ", obj
>> obj.deleted_at = int(time.time())
>> session.add(obj)
>> 
>> and it seems to do what it was intended.
>> 
>> Do you know where could I add a hook for before_bulk_delete?
> 
> Well since you're subclassing Query I'd add an overriding delete() method to 
> your LimitingQuery, do your hook in there, then call the delete() method of 
> Query itself.
> 
> 
> 
>> 
>> Thank you,
>> Andrei Chirila
>> 
>> 
>> On Tue, Sep 20, 2011 at 3:59 PM, Michael Bayer  
>> wrote:
>> 
>> On Sep 20, 2011, at 8:33 AM, Andrei Chirila wrote:
>> 
>> > Hello,
>> >
>> > I'm working at implementing a safe delete on my models (having a
>> > deleted_at field which is checked and setting a deleted_at field when
>> > a model is deleted).
>> >
>> > I looked over the LimitingQuery recipe and at
>> > SessionEvents.before_flush, and below is my solution. As I'm kind of a
>> > new SQLAlchemy user, I would like to ask if anyone can see any obvious
>> > mistake in the following code (I'm sure I missed bulk deletion and
>> > session expunge, but I don't really know how to integrate them).
>> 
>> I'd stay away from identity_map.add(object), as that skips a lot of 
>> bookkeeping that occurs within the Session.   and definitely *never* modify 
>> anything with an _underscore on Session, your code becomes totally non 
>> portable to even small changes in SQLAlchemy (which occur frequently as we 
>> have new releases, bugfixes you'll want, etc).The public (read-only) 
>> accessor for the "deleted" collection is Session.deleted.
>> 
>> If you have an object marked for deletion, you can re-add it, 
>> session.add(object), will remove it from the deleted list.
>> 
>> If there's something you need the Session to do that there is not a public 
>> API for, ask me on the list here first, as I'd rather ensure there are 
>> maintainable paths for use cases.
>> 
>> 
>> 
>> 
>> >
>> > Any feedback would be appreciated.
>> >
>> > Thank you,
>> > Andrei Chirila
>> >
>> > from sqlalchemy.orm.query import Query
>> > import sqlalchemy.event as event
>> >
>> >
>> > class DeletedAtQuery(Query):
>> >""" Query adding a default condition that,
>> >if deleted_at exists, it should be NULL
>> >for the rows returned """
>> >
>> >def get(self, ident):
>> >return Query.get(self.populate_existing(), ident)
>> >
>> >def __iter__(self):
>> >return Query.__iter__(self.private())
>> >
>> >def from_self(self, *ent):
>> >return Query.from_self(self.private(), *ent)
>> >
>> >def private(self):
>> >mzero = self._mapper_zero()
>> >if mzero is not None:
>> >if hasattr(mzero.class_, 'deleted_at'):
>> >crit = mzero.class_.deleted_at == None
>> >
>> >return self.f

Re: [sqlalchemy] safe delete recipe

2011-09-21 Thread Andrei Chirila
Hello,

I looked closely at Query implementation and I ended up with this:

import time

from sqlalchemy.orm.query import Query
import sqlalchemy.event as event


class DeletedAtQuery(Query):
    """ Query adding a default condition that,
    if deleted_at exists, it should be NULL
    for the rows returned """

    def get(self, ident):
        return Query.get(self.populate_existing(), ident)

    def __iter__(self):
        return Query.__iter__(self.private())

    def from_self(self, *ent):
        return Query.from_self(self.private(), *ent)

    def private(self):
        mzero = self._mapper_zero()
        if mzero is not None:
            if hasattr(mzero.class_, 'deleted_at'):
                crit = mzero.class_.deleted_at == None
                return self.filter(crit)
        return self

    def delete(self, synchronize_session='evaluate'):
        context = self._compile_context()
        assert len(context.statement.froms) == 1
        if 'deleted_at' in context.statement.froms[0].columns:
            self.update({'deleted_at': int(time.time())},
                        synchronize_session=synchronize_session)
            return
        super(DeletedAtQuery, self).\
            delete(synchronize_session=synchronize_session)


def paranoid_delete(session, flush_context, instances):
    """ Intercepts item deletions (single item deletions)
    and, if there is a field deleted_at for that
    object, it's updating deleted_at instead of really
    deleting the object """
    for obj in session.deleted:
        if hasattr(obj, 'deleted_at'):
            print "moving object back: ", obj
            obj.deleted_at = int(time.time())
            session.add(obj)


def configure_events(session):
    """ Helper for applying the before_flush hook """
    event.listen(session, 'before_flush', paranoid_delete)


I'm using the _compile_context private method, and I would like to know if
there's anything I could improve in order to get to a complete implementation
of safe delete.
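
For completeness, a self-contained sketch of the before_flush portion of the recipe above, against an assumed model and in-memory SQLite (the model and names are illustrative): re-adding a deleted object removes it from the session's deleted set, so the flush emits an UPDATE of deleted_at instead of a DELETE.

```python
import time

from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.orm import declarative_base, sessionmaker
import sqlalchemy.event as event

def paranoid_delete(session, flush_context, instances):
    # Re-adding a deleted object reverts its deletion; the dirty
    # deleted_at attribute then flushes as an UPDATE.
    for obj in list(session.deleted):
        if hasattr(obj, 'deleted_at'):
            obj.deleted_at = int(time.time())
            session.add(obj)

Base = declarative_base()

class Note(Base):
    __tablename__ = 'notes'
    id = Column(Integer, primary_key=True)
    deleted_at = Column(Integer, nullable=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
event.listen(session, 'before_flush', paranoid_delete)

note = Note()
session.add(note)
session.commit()

session.delete(note)
session.commit()  # the row survives, soft-deleted
```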

Thank you,
Andrei Chirila


On Tue, Sep 20, 2011 at 4:18 PM, Michael Bayer wrote:

>
> On Sep 20, 2011, at 10:12 AM, Andrei Chirila wrote:
>
> Hello Michael,
>
> Thank you :)
>
> I changed the code according to your indications:
>
> def paranoid_delete(session, flush_context, instances):
> """ Intercepts item deletions (single item deletions)
> and, if there is a field deleted_at for that
> object, it's updating deleted at instead of really
> deleting the object """
> # print "current session", session
> # print "context", flush_context
> # print "dirty", session.dirty
> # print "deleted", session._deleted
> for obj in session.deleted:
> # print ">> deleted object:", instance, " -> ", obj
> if hasattr(obj, 'deleted_at'):
> print "moving object back: ", obj
> obj.deleted_at = int(time.time())
> session.add(obj)
>
> and it seems to do what it was intended.
>
> Do you know where could I add a hook for before_bulk_delete?
>
>
> Well since you're subclassing Query I'd add an overriding delete() method
> to your LimitingQuery, do your hook in there, then call the delete() method
> of Query itself.
>
>
>
>
> Thank you,
> Andrei Chirila
>
>
> On Tue, Sep 20, 2011 at 3:59 PM, Michael Bayer 
> wrote:
>
>>
>> On Sep 20, 2011, at 8:33 AM, Andrei Chirila wrote:
>>
>> > Hello,
>> >
>> > I'm working at implementing a safe delete on my models (having a
>> > deleted_at field which is checked and setting a deleted_at field when
>> > a model is deleted).
>> >
>> > I looked over the LimitingQuery recipe and at
>> > SessionEvents.before_flush, and below is my solution. As I'm kind of a
>> > new SQLAlchemy user, I would like to ask if anyone can see any obvious
>> > mistake in the following code (I'm sure I missed bulk deletion and
>> > session expunge, but I don't really know how to integrate them).
>>
>> I'd stay away from identity_map.add(object), as that skips a lot of
>> bookkeeping that occurs within the Session.   and definitely *never* modify
>> anything with an _underscore on Session, your code becomes totally non
>> portable to even small changes in SQLAlchemy (which occur frequently as we
>> have new releases, bugfixes you'll want, etc).The public (read-only)
>> accessor for the "deleted" collection is Session.deleted.
>>
>> If you have an object marked for deletion, you can re-add it,
>> session.add(object), will remove it from the deleted list.
>>
>> If there's something you need the Session to do that there is not a public
>> API for, ask me on the list here first, as I'd rather ensure there are
>> maintainable paths for use cases.
>>
>>
>>
>>
>> >
>> > Any feedback would be appreciated.
>> >
>> > Thank you,
>> > Andrei Chirila
>> >
>> > from sqlalchemy.orm.query import Query
>> > import sqlalchemy.event as event
>> >
>> >
>> > class DeletedAtQuery(Query):
>> >""" Query adding a default condition that,
>> >if deleted

[sqlalchemy] Bulk creation of columns

2011-09-21 Thread pravin battula
Hi, 

How can I create columns in bulk using the create_column method?
I tried as below:

migrate_engine = create_engine('mysql://root:root@localhost/payroll',
                               echo=False)
metadata = MetaData(bind=migrate_engine)
metadata.reflect(bind=migrate_engine, schema='payroll')

tableObj = metadata.tables.get('test.salary')

colList = [Column('description', String(100)),
           Column('information', String(50))]

tableObj.append_column(*colList)
tableObj.create_column(*colList)

but I'm getting an error: TypeError: create() got multiple values for keyword 
argument 'table'.

Please do the needful.




[sqlalchemy] Re: Bulk creation of columns

2011-09-21 Thread pravin battula
I'm getting the error below:
'list' object has no attribute '_set_parent_with_dispatch'

On Sep 21, 3:30 pm, pravin battula  wrote:
> Hi,
>
> How can i create columns in bulk?
> I tried as below but doesn't work.
>
>     migrate_engine = create_engine('mysql://root:root@localhost/
> payroll', echo=False)
>     metadata = MetaData(bind = migrate_engine)
>     metadata.reflect(bind = migrate_engine, schema = 'payroll')
>
>     tableObj = metadata.tables.get('test.salary')
>
>     colList =
> [Column('description',String(100)),Column('information',String(50))]
>
>     tableObj.append_column(colList)
>     tableObj.create_column(colList)
>
> Please do the needful.




[sqlalchemy] object_session returns None, but state still has session_id

2011-09-21 Thread Ben Ford
Hi Folks,

I have the following code:

obj = 
ses = object_session(obj)
if not ses:
    ses = Session()
    ses.add(obj)
ses.commit()

This is giving me an InvalidRequestError saying that the object is already 
attached to a session.

When looking into this I have found:

   - obj._sa_instance_state.session_id is not 
   in sqlalchemy.orm.session._sessions.
   - object_session(parent) also returns None.
   - The session_id given in the InvalidRequestError (i.e. in the 
   "is already attached to session ''" part of the message) == 
   parent._sa_instance_state.session_id

I'm not that familiar with SA's code, but wondered if it would be possible 
to check for a Session not being valid? Perhaps when the KeyError is caught 
in the sqlalchemy.orm.session._state_session function (line 1722 in my 
version), the state's session_id could be set to None, or any other action 
taken that would make the state look like it didn't have a Session.

As for a fix, I guess the best thing to do would be make_transient(obj), 
or is there a better way?

Thanks,
Ben




[sqlalchemy] Bulk creation of columns

2011-09-21 Thread pravin battula
Hi,

How can I create columns in bulk?
I tried as below, but it doesn't work.

migrate_engine = create_engine('mysql://root:root@localhost/payroll',
                               echo=False)
metadata = MetaData(bind=migrate_engine)
metadata.reflect(bind=migrate_engine, schema='payroll')

tableObj = metadata.tables.get('test.salary')

colList = [Column('description', String(100)),
           Column('information', String(50))]

tableObj.append_column(colList)
tableObj.create_column(colList)

Please do the needful.
