[sqlalchemy] ObjectDeletedError: Instance 'xxx' has been deleted

2012-11-16 Thread sajuptpm

The code below throws "ObjectDeletedError: Instance 'xxx' has been deleted" 
when an exception is raised from method1.

How do I fix this issue?


def main_method():
    DBSession.add(db_obj1)
    DBSession.flush()

    for x in lst:
        method1(db_obj1.id)


def method1(id):
    try:
        s1 = DBSession()
        s1.begin_nested()

        db_obj2 = create_new_obj(id)
        DBSession.add(db_obj2)
        DBSession.flush()

        if some_condition:
            raise Exception("Failed")

        s1.commit()
    except Exception, ex:
        s1.rollback()
        raise ex

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/VGUXlX9cevoJ.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: ObjectDeletedError: Instance 'xxx' has been deleted

2012-11-16 Thread sajuptpm


The code below throws "ObjectDeletedError: Instance 'xxx' has been deleted" 
when an exception is raised from method1.
The error is on the line method1(db_obj1.id); the db_obj1.id access is failing.

How do I fix this issue?


def main_method():
    DBSession.add(db_obj1)
    DBSession.flush()

    for x in lst:
        try:
            method1(db_obj1.id)
        except Exception, ex:
            pass


def method1(id):
    try:
        s1 = DBSession()
        s1.begin_nested()

        db_obj2 = create_new_obj(id)
        DBSession.add(db_obj2)
        DBSession.flush()

        if some_condition:
            raise Exception("Failed")

        s1.commit()
    except Exception, ex:
        s1.rollback()
        raise ex

-- 
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/ya-rKw3XinwJ.



[sqlalchemy] Zope-SQLAlchemy extension

2012-11-16 Thread Clemens Herschel, III
In a Pyramid application, in models.py:

DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))

In the view:

dbsession = DBSession

def add(request):
    # get implant1
    dbsession.add(implant)
    transaction.commit()
    dbsession = DBSession

This works fine on the first add. On the second call to add, the first implant 
object is updated rather than a new object added, as I would expect from the 
SQLAlchemy session docs. Using a new session after the commit is suggested 
in the zope-sqlalchemy docs to achieve this. The user in this application 
might do repeated adds into many tables in one request.

So, because I want to use more than one session per request, should I avoid 
the Zope-SQLAlchemy extension and use SQLAlchemy's ScopedSession helper class 
instead? Please excuse any misuse of terminology. Thanks for pointing me in 
the right direction.

Using SQLAlchemy 0.7.3 and zope-sqlalchemy 0.6.1.
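As a side note, the registry behavior of scoped_session can be observed without the Zope extension at all. A minimal sketch with plain SQLAlchemy and an in-memory SQLite database (not the poster's Pyramid setup):

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("sqlite://")
DBSession = scoped_session(sessionmaker(bind=engine))

s1 = DBSession()
s2 = DBSession()
assert s1 is s2          # same scope -> the same Session object

DBSession.remove()       # end the scope
s3 = DBSession()
assert s3 is not s1      # a fresh Session only after remove()
```

With the ZopeTransactionExtension, the remove() step is typically driven by the transaction/request machinery rather than called by hand, which is why the view keeps seeing the same session within one request.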

-- 
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/hKTt7MhHjEsJ.



[sqlalchemy] sqlalchemy.exc.DBAPIError: (Error) ('HY003', '[HY003] [FreeTDS][SQL Server]Program type out of range (0) (SQLGetData)') None None

2012-11-16 Thread Lukasz Szybalski
Hello,
Any idea what this error message means?

I'm trying to execute this:

s = session.execute("assp_Checks @begin_date=:start, @end_date=:end, @company_id=1",
    params={'start': date_to_process, 'end': date_to_process}).fetchall()

I get:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 6, in process_date
  File "/home/unique/checkexport/env26/lib/python2.6/site-packages/SQLAlchemy-0.7.9-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 3210, in fetchall
    self.cursor, self.context)
  File "/home/unique/checkexport/env26/lib/python2.6/site-packages/SQLAlchemy-0.7.9-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 3204, in fetchall
    l = self.process_rows(self._fetchall_impl())
  File "/home/unique/checkexport/env26/lib/python2.6/site-packages/SQLAlchemy-0.7.9-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 3171, in _fetchall_impl
    return self.cursor.fetchall()
sqlalchemy.exc.DBAPIError: (Error) ('HY003', '[HY003] [FreeTDS][SQL Server]Program type out of range (0) (SQLGetData)') None None


If I use isql I have no problems.

exec assp_Checks @begin_date='11/15/2012',
@end_date='11/15/2012',@company_id=1

I get 10 records back.

Any ideas?

Thanks,
Lucas




Re: [sqlalchemy] _DBProxy.dispose() doesn't actually dispose of the pool/connections?

2012-11-16 Thread Carl Meyer
Thanks Michael.

On 11/15/2012 08:41 PM, Michael Bayer wrote:
 that's possible, it's a pretty old, little used, API, and that
 document probably assumes GC will pick up on it, and it would be
 better if dispose() was called on the pool all the way through.   But
 the garbage collector should be reclaiming the pool and those
 connections (all DBAPIs I'm aware of release TCP/IP connections when
 the connection is garbage collected).  Even calling gc.collect(),
 you're not seeing anything clear out ?

gc.collect() does close the connection iff the closed connection object
(well, a ConnectionFairy, actually, in this case) has also gone out of scope
or been deleted; but not otherwise. So apparently the ConnectionFairy
(even after it's closed) still holds references that keep the pool and
its connections alive.

 It seems that perhaps _DBProxy is relying on garbage collection to 
 somehow clean everything up once it lets go of its reference to
 the pool, but even on CPython with refcounting GC this does not
 seem to work;
 
 CPython clears unreachable cycles periodically as well.  calling
 gc.collect() will force this process to occur.

Yeah, when I mentioned refcounting I wasn't even thinking of cycle
collection, I just meant that CPython is the best-case for reliance on
GC, compared to eg pypy, so it's worth having a deterministic way to
make sure all the connections are closed.

 your best bet for now is to just iterate through db.proxy.pools and
 just call dispose() on them.
 
 http://www.sqlalchemy.org/trac/ticket/2609 is added.

Thanks!

Carl
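Michael's suggestion above (iterate the proxy's pools and call dispose() on each) might look roughly like the following. The dict-like `pools` attribute and the `dispose_all` helper name are assumptions about `_DBProxy` internals for this era of SQLAlchemy, so treat it as a sketch, not the library API:

```python
def dispose_all(proxy):
    # Deterministically close every pooled connection, rather than
    # relying on the garbage collector to reclaim the pools; iterate
    # over a copy of the values in case disposal mutates the mapping.
    for pool in list(proxy.pools.values()):
        pool.dispose()
```

Something like `dispose_all(db.proxy)` would then give the deterministic cleanup Carl is after, independent of refcounting or cycle collection.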




[sqlalchemy] Re: Is it possible to define multiple model objects that refer to the same table?

2012-11-16 Thread Rob Crowell
On Nov 15, 10:48 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Nov 15, 2012, at 7:33 PM, Rob Crowell wrote:
  Sorry, that got cut off at the end.

  class IssueTag(Base):
      __tablename__ = 'issue_user_tag'

  sqlalchemy.exc.InvalidRequestError: Table 'issue_user_tag' is already 
  defined for this MetaData instance.  Specify 'extend_existing=True' to 
  redefine options and columns on an existing Table object.

  On Thursday, November 15, 2012 7:32:29 PM UTC-5, Rob Crowell wrote:
  I'm working with a denormalized cache schema, and I've run into a situation 
  where it would be helpful to be able to create multiple classes that extend 
  Base but refer to the same __tablename__.  Is this possible to do?  I am 
  getting this Exception:
      sqlalchemy.exc.InvalidRequestError: Table '[tablename]' is already 
  defined for this MetaData instance.  Specify 'extend_existing=True' to 
  redefine options and columns on an existing Table object.

  For a little more insight, we have some attributes that always have exactly 
  one value (user who created the issue), and other attributes that can have 
  1 or more values (user-defined tags for the issue).  If we were being 
  exhaustive, we would create two cached tables for our issues since 
  sometimes we want to display recent issues sometimes by user and sometimes 
  by tag:
      * issue_user
      * issue_tag

  However, we can get away with writing just one table and querying it with 
  an appropriate group_by(user_id) to achieve the same end as having 2 
  tables.  Since my application should behave as if there were 2 separate 
  cache tables (and I'd like to keep open the option of adding two separate 
  cache tables in the future), I would like to have 2 different Base classes 
  representing the two ways in which we would query the table.  The obvious 
  way of doing this doesn't work:

  class IssueUser(Base):
      __tablename__ = 'issue_user_tag'

  class IssueTag(Base):

  --

 two^H^H^H three ways:

 1. map to a Table:

 mytable = Table('mytable', Base.metadata, Column(...))

 class A(Base):
     __table__ = mytable

 class B(Base):
     __table__ = mytable

 1a: variant of 1, map A as you did but use __table__ on B

 class A(Base):
     __tablename__ = 'mytable'

     x = Column(...)

 class B(Base):
     __table__ = A.__table__

 2. use single table inheritance with no discriminator

 class MyTable(Base):
     __tablename__ = 'mytable'

 class A(MyTable):
    # ...

 class B(MyTable):
    # ...

 I don't have an understanding of your querying situation yet, discriminating 
 on group_by() seems a little strange as group_by() is only intended to be 
 used to group for aggregates, but #1, #1a or #2 should fit the bill.
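For concreteness, variant 1a can be fleshed out as a runnable sketch; the column names here are invented for illustration and are not from the original post:

```python
from sqlalchemy import Column, Integer
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class IssueUser(Base):
    __tablename__ = 'issue_user_tag'
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer)
    tag_id = Column(Integer)

class IssueTag(Base):
    # Reuse the Table object already created for IssueUser instead of
    # redeclaring it; this avoids the "Table 'issue_user_tag' is
    # already defined for this MetaData instance" error.
    __table__ = IssueUser.__table__

assert IssueUser.__table__ is IssueTag.__table__
```

Both classes end up with their own mapper over the single shared Table, so each can carry its own query-building conventions.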


Thanks for the help so far Michael!  I can explain a little more about
what I'm trying to do (I'm using a fictional application here but I
think it pretty accurately translates into my actual application).

BACKGROUND
--

Let's say I'm writing an issue tracking application.  Each issue that
we're tracking has a type (an issue must have exactly one type), and
each issue may have an unlimited number of user-provided labels.

Each day, people browse our issue tracker and each time they do they
generate a page view on the issue.  Here's an example of one day's
worth of data:

IssueA: Views: 1, Type: Bug, Labels: [login-page (id=1),
show-stopper (id=2)]
IssueB: Views: 20, Type: One-Time Task, Labels: [disk-full
(id=3), show-stopper (id=2)]
IssueC: Views: 300, Type: Bug, Labels: [login-page (id=1)]

The BigCo. I'm working for is very interested in knowing which issues
are read by the most people, and they need the ability to generate
reports sliced by arbitrary date ranges.  However, we can tolerate a
day delay, so we are writing summary tables each night.  Two of these
summary tables are aggregated by either issue type or label, and we
also write a third table that can be used to drill-down and see page
visits bucketed by both type and label:

CREATE TABLE `issue_type` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `created` datetime NOT NULL,
  `type` enum('Bug','Feature','Improvement','One-Time Task') NOT NULL,
  `num_watchers` int(10) unsigned NOT NULL,
  PRIMARY KEY (`id`)
)

CREATE TABLE `issue_label` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `created` datetime NOT NULL,
  `label_id` int(10) unsigned NOT NULL,
  `num_watchers` int(10) unsigned NOT NULL,
  PRIMARY KEY (`id`)
)

CREATE TABLE `issue_type_label` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `created` datetime NOT NULL,
  `type` enum('Bug','Feature','Improvement','One-Time Task') NOT NULL,
  `label_id` int(10) unsigned NOT NULL,
  `num_visits` int(10) unsigned NOT NULL,
  PRIMARY KEY (`id`)
)

So we'd run these insert statements at midnight:

INSERT INTO issue_type (created, type, num_visits) VALUES
    ('2012-11-15', 'Bug', 301),
    ('2012-11-15', 'One-Time Task', 20);

INSERT INTO issue_labels (created, label_id, 

Re: [sqlalchemy] Re: Is it possible to define multiple model objects that refer to the same table?

2012-11-16 Thread Michael Bayer

On Nov 16, 2012, at 2:59 PM, Rob Crowell wrote:

 
 Thanks for the help so far Michael!  I can explain a little more about
 what I'm trying to do (I'm using a fictional application here but I
 think it pretty accurately translates into my actual application).
 
 BACKGROUND
 --
 
 Let's say I'm writing an issue tracking application.  Each issue that
 we're tracking has a type (an issue must have exactly one type), and
 each issue may have an unlimited number of user-provided labels.
 
 Each day, people browse our issue tracker and each time they do they
 generate a page view on the issue.  Here's an example of one day's
 worth of data:
 
IssueA: Views: 1, Type: Bug, Labels: [login-page (id=1),
 show-stopper (id=2)]
IssueB: Views: 20, Type: One-Time Task, Labels: [disk-full
 (id=3), show-stopper (id=2)]
IssueC: Views: 300, Type: Bug, Labels: [login-page (id=1)]
 
 The BigCo. I'm working for is very interested in knowing which issues
 are read by the most people, and they need the ability to generate
 reports sliced by arbitrary date ranges.  However, we can tolerate a
 day delay, so we are writing summary tables each night.  Two of these
 summary tables are aggregated by either issue type or label, and we
 also write a third table that can be used to drill-down and see page
 visits bucketed by both type and label:
 
 CREATE TABLE `issue_type` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `created` datetime NOT NULL,
  `type` enum('Bug','Feature','Improvement','One-Time Task') NOT NULL,
  `num_watchers` int(10) unsigned NOT NULL,
  PRIMARY KEY (`id`)
 )
 
 CREATE TABLE `issue_label` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `created` datetime NOT NULL,
  `label_id` int(10) unsigned NOT NULL,
  `num_watchers` int(10) unsigned NOT NULL,
  PRIMARY KEY (`id`)
 )
 
 CREATE TABLE `issue_type_label` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `created` datetime NOT NULL,
  `type` enum('Bug','Feature','Improvement','One-Time Task') NOT NULL,
  `label_id` int(10) unsigned NOT NULL,
  `num_visits` int(10) unsigned NOT NULL,
  PRIMARY KEY (`id`)
 )
 
 So we'd run these insert statements at midnight:
 
 INSERT INTO issue_type (created, type, num_visits) VALUES
    ('2012-11-15', 'Bug', 301),
    ('2012-11-15', 'One-Time Task', 20);
 
 INSERT INTO issue_labels (created, label_id, num_visits) VALUES
    ('2012-11-15', 1, 301),
    ('2012-11-15', 2, 21),
    ('2012-11-15', 3, 20);
 
 INSERT INTO issue_type_label (created, type, label_id, num_visits)
 VALUES
    ('2012-11-15', 'Bug', 1, 301),
    ('2012-11-15', 'Bug', 2, 1),
    ('2012-11-15', 'One-Time Task', 3, 20),
    ('2012-11-15', 'One-Time Task', 2, 20);
 
 Now when we want to generate the summary reports, we query one of the
 first two tables (if we're generating a report aggregated by issue
 type we hit issue_type, if we're generating a report aggregated by
 label we hit issue_label), and when we want to run a drill-down query
 by both type and label, we hit issue_type_label:
 
    # broken down by type
    SELECT type, SUM(num_visits) FROM issue_type
    WHERE created >= '2012-11-01' AND created <= '2012-11-16' GROUP BY type;
 
    # broken down by label
    SELECT label_id, SUM(num_visits) FROM issue_label
    WHERE created >= '2012-11-01' AND created <= '2012-11-16' GROUP BY label_id;
 
    # broken down by both type and label
    SELECT type, label_id, SUM(num_visits) FROM issue_type_label
    WHERE created >= '2012-11-01' AND created <= '2012-11-16'
    GROUP BY type, label_id;
 
 This works fine and is very fast for our application.  However, it's a
 bit redundant since each issue has exactly one issue type, so we can
 safely ignore it when generating the label summary report.  So really,
 we can get away with not having the issue_label table at all, and just
 run this instead (we must continue to have the issue_type table
 however, since a single issue can have many labels or none at all):
 
    # broken down by label
    SELECT label_id, SUM(num_visits) FROM issue_type_label
    WHERE created >= '2012-11-01' AND created <= '2012-11-16' GROUP BY label_id;
 
 
 THE ISSUE
 -
 
 In reality, we have many types of fields that we generate reports on
 (not just type and label), so we have more than a dozen of these
 aggregate tables rolled up by different sets of fields.  While porting
 our data access layer to SQLAlchemy (we have a separate process to
 write these tables which we aren't going to move to SQLAlchemy), I've
 created a bunch of functions that use reflection on the various Model
 objects I've built to figure out what columns we need to select.
 
 Each column that we don't want to GROUP BY or filter out with a WHERE
 clause gets wrapped by func.sum() and passed to a call to
 session.query().  My columns are very consistently named so I have
 automated almost all of this using object reflection, which lets my
 code populate which columns to select, group by, and sum on
 automatically.  I really don't want to write a custom call to query()
 for each table type that 
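The reflection-driven query construction Rob describes can be sketched with SQLAlchemy Core. The table is the fictional one from this thread; the `rollup` helper is a made-up name, not Rob's actual code:

```python
from sqlalchemy import (Column, Date, Integer, MetaData, String, Table,
                        func, select)

metadata = MetaData()

# The fictional drill-down table from this thread.
issue_type_label = Table(
    'issue_type_label', metadata,
    Column('id', Integer, primary_key=True),
    Column('created', Date),
    Column('type', String(32)),
    Column('label_id', Integer),
    Column('num_visits', Integer),
)

def rollup(table, group_cols, measure='num_visits'):
    # Build SELECT <group cols>, SUM(<measure>) ... GROUP BY <group cols>
    # from a plain list of column names, the way a reflection-driven
    # data-access layer would.
    cols = [table.c[name] for name in group_cols]
    return select(*cols, func.sum(table.c[measure])).group_by(*cols)

# e.g. the "broken down by label" report:
label_report = rollup(issue_type_label, ['label_id'])
```

The same helper serves every summary table that follows the naming convention, which is the point of keeping the column lists data-driven rather than writing a custom query() call per table.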

Re: [sqlalchemy] sqlalchemy.exc.DBAPIError: (Error) ('HY003', '[HY003] [FreeTDS][SQL Server]Program type out of range (0) (SQLGetData)') None None

2012-11-16 Thread Michael Bayer

On Nov 16, 2012, at 11:44 AM, Lukasz Szybalski wrote:

 Hello,
 Any idea what this error message means. 
 
 I'm trying to execute this:
 s = session.execute("assp_Checks @begin_date=:start, @end_date=:end, @company_id=1",
     params={'start': date_to_process, 'end': date_to_process}).fetchall()
 
 I get: 
 
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
   File "<stdin>", line 6, in process_date
   File "/home/unique/checkexport/env26/lib/python2.6/site-packages/SQLAlchemy-0.7.9-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 3210, in fetchall
     self.cursor, self.context)
   File "/home/unique/checkexport/env26/lib/python2.6/site-packages/SQLAlchemy-0.7.9-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 3204, in fetchall
     l = self.process_rows(self._fetchall_impl())
   File "/home/unique/checkexport/env26/lib/python2.6/site-packages/SQLAlchemy-0.7.9-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 3171, in _fetchall_impl
     return self.cursor.fetchall()
 sqlalchemy.exc.DBAPIError: (Error) ('HY003', '[HY003] [FreeTDS][SQL Server]Program type out of range (0) (SQLGetData)') None None
  
 
 If I use isql I have no problems.
 
 exec assp_Checks @begin_date='11/15/2012', 
 @end_date='11/15/2012',@company_id=1
 
 I get 10 records back.

I don't have a solution for this but it is certainly due to the reduced 
featureset present in ODBC, particularly that of FreeTDS.You'd be looking 
here to find a path to getting this to work with pyodbc directly, and perhaps 
you'd need to email on the freetds list and/or change your approach.
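One way to act on "getting this to work with pyodbc directly" while keeping the SQLAlchemy engine is to drop beneath the result-processing layer to the raw DBAPI cursor via engine.raw_connection(). A sketch: the `fetch_checks` helper name is made up, and the qmark parameter style is pyodbc's, not the :name style from the original call:

```python
PROC_SQL = "EXEC assp_Checks @begin_date=?, @end_date=?, @company_id=1"

def fetch_checks(engine, day):
    # raw_connection() hands back the pooled DBAPI connection, so the
    # fetch bypasses the SQLAlchemy result-proxy layer that raised the
    # HY003 error in the traceback above.
    raw = engine.raw_connection()
    try:
        cursor = raw.cursor()
        cursor.execute(PROC_SQL, (day, day))
        return cursor.fetchall()
    finally:
        raw.close()
```

If the raw cursor reproduces the HY003 error, the problem lies in pyodbc/FreeTDS itself; if it does not, that narrows the issue to the layer above.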




Re: [sqlalchemy] Re: ObjectDeletedError: Instance 'xxx' has been deleted

2012-11-16 Thread sajuptpm
Hi Michael Bayer,
Thanks.

You are correct: the rollback in method1 is rolling back the transaction in 
main_method. I want to isolate the transaction in main_method from the 
rollback in method1.

I attached more code.


from sqlalchemy.orm import scoped_session, sessionmaker

maker = sessionmaker(autoflush=True, autocommit=False,
                     expire_on_commit=False,
                     extension=ZopeTransactionExtension())
zopelessmaker = sessionmaker(autoflush=True,
                             autocommit=False,
                             expire_on_commit=False)
DBSession = scoped_session(maker)



def main_method():
    db_obj1 = DBModelclass1("Hello")
    DBSession.add(db_obj1)
    DBSession.flush()

    for x in lst:
        try:
            method1(db_obj1.id)
        except Exception, ex:
            pass


def method1(id):
    try:
        s1 = DBSession()
        s1.begin_nested()

        db_obj2 = DBModelclass2("Test")
        db_obj2.refname = "name_%s" % (id)
        DBSession.add(db_obj2)
        DBSession.flush()

        if some_condition:
            raise Exception("Failed")

        s1.commit()
    except Exception, ex:
        s1.rollback()
        raise ex
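The isolation being asked for here is what SQL savepoints provide, which is what begin_nested() reaches for. Stripped of the ORM, the intended mechanics can be sketched with the stdlib sqlite3 module (a simplified stand-in for the DBSession setup above, not the poster's actual code). When the inner failure instead rolls back the whole transaction, as Session.rollback() does, the outer row, and with it db_obj1, disappears; that is where the ObjectDeletedError comes from:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None        # manage transactions by hand
cur = conn.cursor()
cur.execute("CREATE TABLE obj (id INTEGER PRIMARY KEY, name TEXT)")

cur.execute("BEGIN")
cur.execute("INSERT INTO obj (name) VALUES ('outer')")   # like db_obj1

cur.execute("SAVEPOINT method1")                         # begin_nested()
cur.execute("INSERT INTO obj (name) VALUES ('inner')")   # like db_obj2
cur.execute("ROLLBACK TO SAVEPOINT method1")             # inner failure undone

cur.execute("COMMIT")                                    # outer work survives

rows = [name for (name,) in cur.execute("SELECT name FROM obj")]
# rows is ['outer']: only the savepoint's work was rolled back
```

The key point is that the inner rollback must target the savepoint only; a rollback of the whole session undoes the outer INSERT as well.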


-- 
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/uIJwy6KOAdsJ.