[sqlalchemy] self.data[i] index out of range

2009-11-07 Thread jo

Hi all,

I've been dogged by this error for months; could someone please explain
what it means and how to avoid it?


  File "/usr/lib/python2.4/site-packages/sqlalchemy/orm/attributes.py", line 532, in __getitem__
    return self.data[i]
IndexError: list index out of range

thanks for any help.

j


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] no such table OperationalError despite CREATE TABLE being issued

2009-11-07 Thread James

Hi, this concerns running functional tests in TurboGears2, using SA
0.5.1.

As part of the functional test set up, all the model's tables are
CREATEd, and DROPped as part of the tear down.

However, despite seeing the expected sequence of CREATE, 1st test,
DROP, CREATE, 2nd test, DROP, the second test fails with "no such
table" errors.

Condensed INFO level logging of sqlalchemy.engine:
...
INFO PRAGMA table_info(tg_user)
INFO ()
CREATE TABLE tg_user (
user_id INTEGER NOT NULL,
user_name VARCHAR(16) NOT NULL,
email_address VARCHAR(255) NOT NULL,
display_name VARCHAR(255),
password VARCHAR(80),
created TIMESTAMP,
PRIMARY KEY (user_id),
 UNIQUE (user_name),
 UNIQUE (email_address)
)
INFO ()
INFO COMMIT
...
INFO BEGIN
[DB interactions for first test]
INFO COMMIT
...
INFO PRAGMA table_info(tg_user)
INFO ()
...
DROP TABLE tg_user
INFO ()
INFO COMMIT
...
INFO PRAGMA table_info(tg_user)
INFO ()
...
CREATE TABLE tg_user (
user_id INTEGER NOT NULL,
user_name VARCHAR(16) NOT NULL,
email_address VARCHAR(255) NOT NULL,
display_name VARCHAR(255),
password VARCHAR(80),
created TIMESTAMP,
PRIMARY KEY (user_id),
 UNIQUE (user_name),
 UNIQUE (email_address)
)
INFO ()
INFO COMMIT
...

This results in the following stack trace when trying to interact with
the tg_user table during the second test:
Traceback (most recent call last):
  ...
  File /Users/james/virtual/unit_tests/UnitTests/unit_tests/
websetup.py, line 54, in setup_app
model.DBSession.flush()
  File /Users/james/virtual/unit_tests/lib/python2.5/site-packages/
SQLAlchemy-0.5.1-py2.5.egg/sqlalchemy/orm/scoping.py, line 121, in do
return getattr(self.registry(), name)(*args, **kwargs)
  File /Users/james/virtual/unit_tests/lib/python2.5/site-packages/
SQLAlchemy-0.5.1-py2.5.egg/sqlalchemy/orm/session.py, line 1347, in
flush
self._flush(objects)
  File /Users/james/virtual/unit_tests/lib/python2.5/site-packages/
SQLAlchemy-0.5.1-py2.5.egg/sqlalchemy/orm/session.py, line 1417, in
_flush
flush_context.execute()
  File /Users/james/virtual/unit_tests/lib/python2.5/site-packages/
SQLAlchemy-0.5.1-py2.5.egg/sqlalchemy/orm/unitofwork.py, line 243, in
execute
UOWExecutor().execute(self, tasks)
  File /Users/james/virtual/unit_tests/lib/python2.5/site-packages/
SQLAlchemy-0.5.1-py2.5.egg/sqlalchemy/orm/unitofwork.py, line 706, in
execute
self.execute_save_steps(trans, task)
  File /Users/james/virtual/unit_tests/lib/python2.5/site-packages/
SQLAlchemy-0.5.1-py2.5.egg/sqlalchemy/orm/unitofwork.py, line 721, in
execute_save_steps
self.save_objects(trans, task)
  File /Users/james/virtual/unit_tests/lib/python2.5/site-packages/
SQLAlchemy-0.5.1-py2.5.egg/sqlalchemy/orm/unitofwork.py, line 712, in
save_objects
task.mapper._save_obj(task.polymorphic_tosave_objects, trans)
  File /Users/james/virtual/unit_tests/lib/python2.5/site-packages/
SQLAlchemy-0.5.1-py2.5.egg/sqlalchemy/orm/mapper.py, line 1346, in
_save_obj
c = connection.execute(statement.values(value_params), params)
  File /Users/james/virtual/unit_tests/lib/python2.5/site-packages/
SQLAlchemy-0.5.1-py2.5.egg/sqlalchemy/engine/base.py, line 824, in
execute
return Connection.executors[c](self, object, multiparams, params)
  File /Users/james/virtual/unit_tests/lib/python2.5/site-packages/
SQLAlchemy-0.5.1-py2.5.egg/sqlalchemy/engine/base.py, line 874, in
_execute_clauseelement
return self.__execute_context(context)
  File /Users/james/virtual/unit_tests/lib/python2.5/site-packages/
SQLAlchemy-0.5.1-py2.5.egg/sqlalchemy/engine/base.py, line 896, in
__execute_context
self._cursor_execute(context.cursor, context.statement,
context.parameters[0], context=context)
  File /Users/james/virtual/unit_tests/lib/python2.5/site-packages/
SQLAlchemy-0.5.1-py2.5.egg/sqlalchemy/engine/base.py, line 950, in
_cursor_execute
self._handle_dbapi_exception(e, statement, parameters, cursor,
context)
  File /Users/james/virtual/unit_tests/lib/python2.5/site-packages/
SQLAlchemy-0.5.1-py2.5.egg/sqlalchemy/engine/base.py, line 931, in
_handle_dbapi_exception
raise exc.DBAPIError.instance(statement, parameters, e,
connection_invalidated=is_disconnect)
OperationalError: (OperationalError) no such table: tg_user u'INSERT
INTO tg_user (user_name, email_address, display_name, password,
created) VALUES (?, ?, ?, ?, ?)' [u'manager',
u'mana...@somedomain.com', u'Example manager',
u'276e4c1a24e5c8005f71dc8a2a86912347355f4dae87891e37ccea9c5fdc2753c49549168c8d558e',
'2009-11-07 10:51:40.039211']


Can anyone see why a newly created table wouldn't be found by
SQLAlchemy? What further information would be useful?

This is using sqlalchemy.url = sqlite:///:memory:

Thanks!
James

[sqlalchemy] Re: no such table OperationalError despite CREATE TABLE being issued

2009-11-07 Thread Michael Bayer


On Nov 7, 2009, at 6:06 AM, James wrote:


 Can anyone see why a new created table wouldn't be found by
 SQLAlchemy? What more information could I give that would be useful?

 This is using sqlalchemy.url = sqlite:///:memory:

the most obvious cause would be that two different engines are being  
used, since sqlite memory databases are local only to a single  
connection.
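Michael's point can be demonstrated with nothing but the stdlib sqlite3 module (a minimal sketch; the table name just mirrors the one from the logs above):

```python
import sqlite3

# Every connection to ":memory:" gets its own private database.
conn_a = sqlite3.connect(":memory:")
conn_b = sqlite3.connect(":memory:")

conn_a.execute("CREATE TABLE tg_user (user_id INTEGER PRIMARY KEY)")
conn_a.execute("INSERT INTO tg_user (user_id) VALUES (1)")

# The connection that issued the CREATE sees the table...
print(conn_a.execute("SELECT count(*) FROM tg_user").fetchone()[0])  # 1

# ...but any other connection fails just like the second test did.
try:
    conn_b.execute("SELECT count(*) FROM tg_user")
except sqlite3.OperationalError as exc:
    print(exc)  # no such table: tg_user
```

So if the test setup and teardown end up running against two different engines (hence two different connections), the second engine never sees the tables created through the first.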





[sqlalchemy] Re: self.data[i] index out of range

2009-11-07 Thread Michael Bayer


On Nov 7, 2009, at 3:17 AM, jo wrote:


 Hia all,

 I'm dogged by this error for months,
 could someone, please, explain me what it means and how to avoid it.


  File "/usr/lib/python2.4/site-packages/sqlalchemy/orm/attributes.py", line 532, in __getitem__
    return self.data[i]
 IndexError: list index out of range

 thanks for any help.

upgrade to a recent version of SQLA at the very least, since I can't
even find such a function in the attributes.py module.




[sqlalchemy] Re: executemany + postgresql

2009-11-07 Thread Jon Nelson

On Fri, Nov 6, 2009 at 9:57 AM, Michael Bayer mike...@zzzcomputing.com wrote:
 Before I even posted I resorted to strace. strace immediately
 confirmed my suspicion - when using psycopg2 I don't see one big fat
 INSERT with lots of binds, I see one INSERT per bind, and it's this
 that is ultimately killing the performance. You can easily observe
 this via strace: as I'm sure you know, the communication between the
 test program and postgresql takes place across a socket (unix domain
 or tcp/ip). For every single set of bind params, the result is
 essentially one sendto (INSERT INTO ) and rt_sigprocmask, a poll,
 and then a recvfrom and rt_sigprocmask pair.  Profiling at the C level
 shows that sendto accounts for *35%* of the total runtime and recvfrom
 a healthy 15%. It's this enormous overhead for every single bind param
 that's killing the performance.

 have you asked about this on the psycopg2 mailing list ?   its at 
 http://mail.python.org/mailman/listinfo/python-list
  .   Let me know if you do, because I'll get out the popcorn... :)

That's the python list.
Anyway, I did some more testing. executemany performance is no
better than looping over execute, because that's all executemany
appears to do in any case.

However, I manually built a big fat set of bind params (bypassing
SQLAlchemy entirely) and got a SUBSTANTIAL performance improvement.
PostgreSQL as of 8.2 supports /sets/ of bind params; it'd be nice if
pg8000 or psycopg2 (or both) supported that. Building 25000 bind
params by hand is not fun, but it got me to just shy of 50K
inserts/second.
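For illustration, the hand-built statement amounts to rendering one placeholder group per row and flattening the parameters into a single execute. A sketch of the idea (shown against stdlib sqlite3 so it runs anywhere; modern SQLite also accepts multi-row VALUES, and against PostgreSQL 8.2+ the same string goes through the DBAPI in one round trip):

```python
import sqlite3

rows = [(i, "name-%d" % i) for i in range(400)]

# One "(?, ?)" group per row, with the bind params flattened to match:
# a single INSERT ... VALUES (...), (...), ... statement.
groups = ", ".join(["(?, ?)"] * len(rows))
sql = "INSERT INTO t (id, name) VALUES " + groups
params = [v for row in rows for v in row]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, name TEXT)")
conn.execute(sql, params)  # one statement instead of 400
print(conn.execute("SELECT count(*) FROM t").fetchone()[0])  # 400
```

Note the DBAPI's bind-parameter limit caps how many rows fit in one statement, so large loads are typically chunked.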

 We also support the pg8000 DBAPI in 0.6.  I doubt its doing something
 differently here but feel free to connect with postgresql+pg8000://
 and see what you get.

I tried pg8000 but I got an error:

...

return self.dbapi.connect(*cargs, **cparams)
sqlalchemy.exc.DBAPIError: (TypeError) connect() takes at least 1
non-keyword argument (0 given) None None



-- 
Jon




[sqlalchemy] Re: executemany + postgresql

2009-11-07 Thread Michael Bayer

On Nov 7, 2009, at 12:53 PM, Jon Nelson wrote:


 have you asked about this on the psycopg2 mailing list ?   its at 
 http://mail.python.org/mailman/listinfo/python-list
  .   Let me know if you do, because I'll get out the popcorn... :)

 That's the python list.

oops:

http://lists.initd.org/mailman/listinfo/psycopg



 I tried pg8000 but I got an error:

 ...

return self.dbapi.connect(*cargs, **cparams)
 sqlalchemy.exc.DBAPIError: (TypeError) connect() takes at least 1
 non-keyword argument (0 given) None None

i can't reproduce that.   this is with the latest trunk:

from sqlalchemy import *

e = create_engine('postgresql+pg8000://scott:ti...@localhost/test')

print e.execute("select 1").fetchall()

produces:

[(1,)]






[sqlalchemy] Re: executemany + postgresql

2009-11-07 Thread Jon Nelson

On Sat, Nov 7, 2009 at 11:58 AM, Michael Bayer mike...@zzzcomputing.com wrote:

 On Nov 7, 2009, at 12:53 PM, Jon Nelson wrote:

 have you asked about this on the psycopg2 mailing list ?   its at
 http://mail.python.org/mailman/listinfo/python-list

  .   Let me know if you do, because I'll get out the popcorn... :)

 That's the python list.

 oops:
 http://lists.initd.org/mailman/listinfo/psycopg


 I tried pg8000 but I got an error:

 ...

    return self.dbapi.connect(*cargs, **cparams)
 sqlalchemy.exc.DBAPIError: (TypeError) connect() takes at least 1
 non-keyword argument (0 given) None None

 i can't reproduce that.   this is with the latest trunk:
 from sqlalchemy import *
 e = create_engine('postgresql+pg8000://scott:ti...@localhost/test')
 print e.execute("select 1").fetchall()
 produces:
 [(1,)]

Apparently, pg8000 requires host, user and pass (or at least one of those).

Of course, then when I am connected, I get a traceback:

...
metadata.drop_all()
  File /usr/lib64/python2.6/site-packages/sqlalchemy/schema.py, line
1871, in drop_all
bind.drop(self, checkfirst=checkfirst, tables=tables)
  File /usr/lib64/python2.6/site-packages/sqlalchemy/engine/base.py,
line 1336, in drop
self._run_visitor(ddl.SchemaDropper, entity,
connection=connection, **kwargs)
  File /usr/lib64/python2.6/site-packages/sqlalchemy/engine/base.py,
line 1360, in _run_visitor
visitorcallable(self.dialect, conn, **kwargs).traverse(element)
  File /usr/lib64/python2.6/site-packages/sqlalchemy/sql/visitors.py,
line 86, in traverse
return traverse(obj, self.__traverse_options__, self._visitor_dict)
  File /usr/lib64/python2.6/site-packages/sqlalchemy/sql/visitors.py,
line 197, in traverse
return traverse_using(iterate(obj, opts), obj, visitors)
  File /usr/lib64/python2.6/site-packages/sqlalchemy/sql/visitors.py,
line 191, in traverse_using
meth(target)
  File /usr/lib64/python2.6/site-packages/sqlalchemy/engine/ddl.py,
line 89, in visit_metadata
collection = [t for t in reversed(sql_util.sort_tables(tables)) if
self._can_drop(t)]
  File /usr/lib64/python2.6/site-packages/sqlalchemy/engine/ddl.py,
line 104, in _can_drop
return not self.checkfirst or
self.dialect.has_table(self.connection, table.name,
schema=table.schema)
  File 
/usr/lib64/python2.6/site-packages/sqlalchemy/dialects/postgresql/base.py,
line 611, in has_table
type_=sqltypes.Unicode)]
  File /usr/lib64/python2.6/site-packages/sqlalchemy/engine/base.py,
line 991, in execute
return Connection.executors[c](self, object, multiparams, params)
  File /usr/lib64/python2.6/site-packages/sqlalchemy/engine/base.py,
line 1053, in _execute_clauseelement
return self.__execute_context(context)
  File /usr/lib64/python2.6/site-packages/sqlalchemy/engine/base.py,
line 1076, in __execute_context
self._cursor_execute(context.cursor, context.statement,
context.parameters[0], context=context)
  File /usr/lib64/python2.6/site-packages/sqlalchemy/engine/base.py,
line 1136, in _cursor_execute
self.dialect.do_execute(cursor, statement, parameters, context=context)
  File /usr/lib64/python2.6/site-packages/sqlalchemy/engine/default.py,
line 207, in do_execute
cursor.execute(statement, parameters)
  File pg8000/dbapi.py, line 243, in _fn
return fn(self, *args, **kwargs)
  File pg8000/dbapi.py, line 312, in execute
self._execute(operation, args)
  File pg8000/dbapi.py, line 317, in _execute
self.cursor.execute(new_query, *new_args)
  File pg8000/interface.py, line 303, in execute
self._stmt = PreparedStatement(self.connection, query,
statement_name=, *[{type: type(x), value: x} for x in args])
  File pg8000/interface.py, line 108, in __init__
self._parse_row_desc = self.c.parse(self._statement_name, statement, types)
  File pg8000/protocol.py, line 918, in _fn
return fn(self, *args, **kwargs)
  File pg8000/protocol.py, line 1069, in parse
self._send(Parse(statement, qs, param_types))
  File pg8000/protocol.py, line 975, in _send
data = msg.serialize()
  File pg8000/protocol.py, line 121, in serialize
val = struct.pack("!i", len(val) + 4) + val
UnicodeDecodeError: 'ascii' codec can't decode byte 0x8d in position
3: ordinal not in range(128)



-- 
Jon




[sqlalchemy] Re: executemany + postgresql

2009-11-07 Thread Michael Bayer


On Nov 7, 2009, at 1:30 PM, Jon Nelson wrote:

  File pg8000/protocol.py, line 121, in serialize
val = struct.pack("!i", len(val) + 4) + val
 UnicodeDecodeError: 'ascii' codec can't decode byte 0x8d in position
 3: ordinal not in range(128)

make sure you're on the latest tip of pg8000, which these days seems
to be at http://github.com/mfenniak/pg8000/tree/trunk .  It also
adheres to the client encoding of your PG database, which you should
make sure is utf-8.

But it's not going to render an INSERT...VALUES with multiple
parameter sets in one big string, so if that's your goal you need to
generate that string yourself.  I'm surprised that sqlite, per your
observation, parses an INSERT statement and re-renders it with
multiple VALUES clauses?  Very surprising behavior.







[sqlalchemy] Re: executemany + postgresql

2009-11-07 Thread Jon Nelson

On Sat, Nov 7, 2009 at 3:02 PM, Michael Bayer mike...@zzzcomputing.com wrote:


 On Nov 7, 2009, at 1:30 PM, Jon Nelson wrote:

  File pg8000/protocol.py, line 121, in serialize
    val = struct.pack("!i", len(val) + 4) + val
 UnicodeDecodeError: 'ascii' codec can't decode byte 0x8d in position
 3: ordinal not in range(128)

 make sure you're on the latest tip of pg8000, which these days seems
 to be at http://github.com/mfenniak/pg8000/tree/trunk .  It also
 adheres to the client encoding of your PG database, which you should
 make sure is on utf-8.

Ah. I was running the latest /released/ version - I generally avoid
running 'tip/HEAD/whatever' except during testing. Since I don't
expect pg8000 to have any substantially different behavior, it's
probably not even worth the effort.

snip/

 I'm surprised that sqlite, per your
 observation, parses an INSERT statement and re-renders it with
 multiple VALUES clauses?  Very surprising behavior.

I'm not sure I said that - I certainly didn't intend to.

Ultimately, the IPC costs associated with each set of bind params (one
per row) just murder psycopg2 when compared to sqlite. There isn't
any sqlite RPC per se, since it's always local.  I can only assume
that sqlite defers locking the database until the start of a
transaction, and since sqlite isn't multi-writer aware the overhead of
doing so is minimal.

I wasn't comparing sqlite and postgresql per se - there isn't much of
a comparison in my mind once you start needing all of the features,
stability, and power that postgresql brings. However, I was
disappointed to see that psycopg2 is not making use of the (postgresql
8.2 and newer) multi-bind param INSERT stuff, as this ultimately
reduces the IPC overhead to a very small amount.

The cost of a single call to postgresql might be small, but when you
multiply it by hundreds of thousands or millions it suddenly becomes a
deciding factor in some situations.

-- 
Jon




[sqlalchemy] Query caching and garbage collection of related instances

2009-11-07 Thread Adam Dziendziel

Hi,

I am trying to use the query caching solution described here:
http://svn.sqlalchemy.org/sqlalchemy/trunk/examples/query_caching/per_session.py

In most cases it works: the returned records are cached, and I store them
in an LRU cache modeled after http://code.activestate.com/recipes/498245/

However, when I run a long-running operation that touches hundreds of
other records, the garbage collector apparently runs over the
session's weak-referencing identity map. The cache keeps the returned
records, but the eagerly loaded related instances of those records are
lost, and the ORM issues queries to load them again from the database.
I understand that there are no strong references between an instance
and its related instances.

What is the best solution to keep related instances in a session?

If I create a session with weak_identity_map=False, then during my
long-running operation I will run out of memory unless I expunge
unused records; however, it is easy to miss one record, and then the
identity map grows anyway.

Is it possible to get a list of the instances referenced by another
instance, so that I could store that list together with the instance
in the LRU cache? Or to make a session with a strong-referencing map
and an LRU policy that keeps it below a given size?
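One workaround along the lines of the first idea is to pin strong references to the related instances alongside each cached object, so nothing the cache entry depends on can be garbage collected out of the weak identity map. A hypothetical sketch (the LRUCache class and its `related` argument are illustrative, not SQLAlchemy API):

```python
from collections import OrderedDict

class LRUCache:
    """Bounded cache holding strong references to each cached object
    *and* a pinned list of its related instances."""

    def __init__(self, maxsize=128):
        self.maxsize = maxsize
        self._data = OrderedDict()

    def put(self, key, obj, related=()):
        # Pin the related instances alongside the object itself.
        self._data[key] = (obj, list(related))
        self._data.move_to_end(key)
        while len(self._data) > self.maxsize:
            self._data.popitem(last=False)  # evict least recently used

    def get(self, key):
        obj, _related = self._data[key]
        self._data.move_to_end(key)  # mark as recently used
        return obj
```

The caller would pass whatever it knows was eagerly loaded, e.g. `cache.put(user.id, user, related=[user.group] + list(user.permissions))` (attribute names hypothetical); as long as the entry lives, those instances stay in the session's identity map.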


Regards,
Adam