[sqlalchemy] NestedSet listeners

2013-02-21 Thread Grzegorz Śliwiński
Some time ago I created NestedSet listeners for SQLAlchemy.

They use left/right values along with depth and parent_id fields. It
works almost as expected; however, I still have some issues, especially with
updates, where I get results different from what I expect, and I can't seem
to pinpoint the source.

The source for listeners can be found here:

https://gist.github.com/fizyk/4757230


It also contains the tests I have created so far (all of them pass, though).
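For reference, the kind of model such listeners maintain looks roughly like
this (a sketch only; the actual column names in the gist may differ):

from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Node(Base):
    __tablename__ = 'nodes'

    id = Column(Integer, primary_key=True)
    # nested-set bookkeeping: all of a node's descendants have
    # left/right values strictly between the node's own lft and rgt
    lft = Column(Integer, nullable=False)
    rgt = Column(Integer, nullable=False)
    depth = Column(Integer, nullable=False, default=0)
    parent_id = Column(Integer, ForeignKey('nodes.id'), nullable=True)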

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




Re: [sqlalchemy] NestedSet listeners

2013-02-21 Thread Michael Bayer
Nested sets are pretty tough with the ORM, as the pattern requires a mode of
operation that resists the ORM's natural batching behavior.

Have you looked at an existing package like 
https://pypi.python.org/pypi/SQLAlchemy-ORM-tree/ ?






Re: [sqlalchemy] NestedSet listeners

2013-02-21 Thread Grzegorz Śliwiński
Hmm... my search scope was rather narrow, as I was searching specifically for
the term "nested set". One concern: that package is based mostly on Mapper
and Session extensions, which are being deprecated.

I'll have a look at the codebase though; maybe I'll get some idea of what I
did wrong.





Re: [sqlalchemy] NestedSet listeners

2013-02-21 Thread Michael Bayer
The mapper/session extensions can be swapped for event listeners almost
directly.
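For example, a hook that used to live in a MapperExtension.before_insert()
method could be registered roughly like this (a sketch; the listener name is
a placeholder, and passing the mapper() function targets all mappers):

from sqlalchemy import event
from sqlalchemy.orm import mapper

@event.listens_for(mapper, 'before_insert')
def set_nested_set_values(mapper_, connection, target):
    # same arguments, one-to-one, as MapperExtension.before_insert
    # (minus self); the old method body can move here unchanged
    pass

SessionExtension hooks move over the same way, e.g. with
event.listens_for(Session, 'before_flush').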






[sqlalchemy] Proper way to do processing across entire db?

2013-02-21 Thread Victor Ng
I do a lot of processing on large amounts of data. 

The common pattern we follow is: 

1. Iterate through a large data set 
2. Do some sort of processing (e.g. NLP tasks like tokenization, 
capitalization, regex parsing, ... ) 
3. Insert the new results into another table. 

Right now we are doing something like this: 

for x in session.query(Foo).yield_per(1): 
  bar = Bar()
  bar.hello = x.world.lower()
  session.add(bar)
  session.flush()
session.commit()

This works, though not well: typically we have to wait 30 minutes to an hour 
before we see any `bar`s being committed. 

My question is: Is there a way that we can commit as we are iterating 
without breaking yield_per? 

If not, what is the recommended way of doing this? 

**NOTE: There are a lot of answers on Stack Overflow that involve writing 
custom pagination functions for session.query; their efficiency and 
effectiveness have not been benchmarked. 


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




Re: [sqlalchemy] Proper way to do processing across entire db?

2013-02-21 Thread A.M.

Do you really need to flush after making each new Bar? Every flush implies a
database round-trip and a state sync with SQLAlchemy.

In any case, you should gather a profile to see where and how the time is
getting spent. SQLAlchemy is a complex framework, so whatever performance
assumptions are implied in the code may be wrong.
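For instance, a rough sketch of how you might profile the loop with the
standard library (Foo, Bar and session are the names from your post; the
batch size is arbitrary):

import cProfile
import pstats

def do_work(session):
    for i, x in enumerate(session.query(Foo).yield_per(1)):
        bar = Bar()
        bar.hello = x.world.lower()
        session.add(bar)
        if i % 1000 == 0:        # flush in batches rather than per row
            session.flush()
    session.commit()

profiler = cProfile.Profile()
profiler.runcall(do_work, session)
profiler.dump_stats('work.prof')
pstats.Stats('work.prof').sort_stats('cumulative').print_stats(20)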

Cheers,
M

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




Re: [sqlalchemy] mysql_DEFAULT_CHARSET doesn't work with ForeignKeyConstraint func

2013-02-21 Thread Michael Bayer
What do you mean by "won't work"? Error message? Stack trace? Silent failure
(if so, what does the SQL output say?)


On Feb 21, 2013, at 4:21 PM, junepeach juneyh...@gmail.com wrote:

When I run 'alembic upgrade head', the below won't work:


revision = '2e76fbbd703c'
down_revision = None

from alembic import op
import sqlalchemy as sa

def upgrade():
    op.create_table('test1',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.PrimaryKeyConstraint('id'))

    op.create_table('test2',
        sa.Column('id', sa.String(length=20), nullable=False),
        sa.Column('type_id', sa.Integer(), nullable=False),
        sa.Column('description', sa.TEXT(), nullable=True),
        sa.Column('whattime', sa.DateTime(), nullable=True),
        sa.ForeignKeyConstraint(['type_id'], ['test1.id']),
        sa.PrimaryKeyConstraint('id'),
        mysql_ENGINE='InnoDB',
        mysql_DEFAULT_CHARSET='ascii')

def downgrade():
    op.drop_table('test2')
    op.drop_table('test1')
 ---
But it works if I remove the sa.ForeignKeyConstraint(['type_id'],
['test1.id']) line; I don't know what causes that. My Python and SQLAlchemy
versions are:

Python 2.7.3 (default, Sep 21 2012, 14:43:48)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sqlalchemy
>>> print sqlalchemy.__version__
0.8.0b2

alembic==0.4.0

Is there a bug, or something I need to change to make it work?
 
 Thanks a lot.
 




Re: [sqlalchemy] Proper way to do processing across entire db?

2013-02-21 Thread Victor Ng
Um, sure. 

That still doesn't answer my question, though. 

I am interested in persisting changes to my db as I am iterating through 
yield_per. 
 





Re: [sqlalchemy] Proper way to do processing across entire db?

2013-02-21 Thread Simon King
Do your Foo objects have an ordering that you can use, such as a numeric ID? If 
so, you could query for the first few hundred objects, process them, then do a 
new query for the next hundred, and so on. This should keep the memory usage of 
the process under control at least.
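A minimal sketch of that idea, assuming Foo has an integer primary key id
(the chunk size is arbitrary):

CHUNK = 500
last_id = 0
while True:
    chunk = (session.query(Foo)
                    .filter(Foo.id > last_id)
                    .order_by(Foo.id)
                    .limit(CHUNK)
                    .all())
    if not chunk:
        break
    for x in chunk:
        bar = Bar()
        bar.hello = x.world.lower()
        session.add(bar)
    last_id = chunk[-1].id   # remember the boundary before committing
    session.commit()         # no open cursor is held here, so this is safe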

Hope that helps,

Simon


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




Re: [sqlalchemy] Proper way to do processing across entire db?

2013-02-21 Thread Michael Bayer


You can't really commit across a yield_per() that way, since the yield_per() is 
holding onto an open cursor.   Committing the transaction assumes cursors and 
everything else are disposed.  This limitation exists at least on the SQLAlchemy 
end and is likely a limiting factor with most DBAPIs as well (though I haven't 
tested each one).

The recommended pattern is to read the records in chunks using row limiting.   
My preferred method is the one shown here: 
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/WindowedRangeQuery.  This 
approach has the caveat that you need to be on a decent database like Postgresql, 
SQL Server, or Oracle which supports window functions.  If not, you need to find 
some other way to define the chunks; for example, if the table has 1M records and 
incrementing integer ids, chunk it in ranges of 1-10000, 10001-20000, etc.

The other way to chunk is to use LIMIT/OFFSET, but this has the drawback that 
the OFFSET scans through all the preceding rows, and you need an ORDER BY on 
every query.  The window function approach only does one query with ORDER BY.
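A condensed sketch of the windowed approach (the recipe linked above is more
complete; this version assumes an integer Foo.id and a database with window
function support):

from sqlalchemy import and_, func

def window_clauses(session, column, windowsize):
    # number every row in order, then keep every windowsize-th value
    # of ``column`` as the start of a chunk
    subq = session.query(
        column.label('col'),
        func.row_number().over(order_by=column).label('rownum')
    ).subquery()
    starts = [row[0] for row in
              session.query(subq.c.col)
                     .filter(subq.c.rownum % windowsize == 1)]
    for start, next_start in zip(starts, starts[1:] + [None]):
        if next_start is not None:
            yield and_(column >= start, column < next_start)
        else:
            yield column >= start

# one short-lived SELECT per window; committing between windows is fine
for clause in window_clauses(session, Foo.id, 1000):
    for x in session.query(Foo).filter(clause).order_by(Foo.id):
        bar = Bar()
        bar.hello = x.world.lower()
        session.add(bar)
    session.commit()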


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




Re: [sqlalchemy] mysql_DEFAULT_CHARSET doesn't work with ForeignKeyConstraint func

2013-02-21 Thread junepeach
root@luck:/home/Documents# alembic upgrade head
INFO  [alembic.migration] Context impl MySQLImpl.
INFO  [alembic.migration] Will assume non-transactional DDL.
INFO  [alembic.migration] Running upgrade None -> d28a086e79d
Traceback (most recent call last):
  File "/usr/local/bin/alembic", line 9, in <module>
    load_entry_point('alembic==0.4.0', 'console_scripts', 'alembic')()
  File "/usr/local/lib/python2.7/dist-packages/alembic/config.py", line 255, in main
    CommandLine(prog=prog).main(argv=argv)
  File "/usr/local/lib/python2.7/dist-packages/alembic/config.py", line 250, in main
    self.run_cmd(cfg, options)
  File "/usr/local/lib/python2.7/dist-packages/alembic/config.py", line 241, in run_cmd
    **dict((k, getattr(options, k)) for k in kwarg)
  File "/usr/local/lib/python2.7/dist-packages/alembic/command.py", line 124, in upgrade
    script.run_env()
  File "/usr/local/lib/python2.7/dist-packages/alembic/script.py", line 191, in run_env
    util.load_python_file(self.dir, 'env.py')
  File "/usr/local/lib/python2.7/dist-packages/alembic/util.py", line 185, in load_python_file
    module = imp.load_source(module_id, path, open(path, 'rb'))
  File "alembic/env.py", line 74, in <module>
    run_migrations_online()
  File "alembic/env.py", line 67, in run_migrations_online
    context.run_migrations()
  File "<string>", line 7, in run_migrations
  File "/usr/local/lib/python2.7/dist-packages/alembic/environment.py", line 494, in run_migrations
    self.get_context().run_migrations(**kw)
  File "/usr/local/lib/python2.7/dist-packages/alembic/migration.py", line 211, in run_migrations
    change(**kw)
  File "alembic/versions/removelater.py", line 32, in upgrade
    mysql_ENGINE='InnoDB'
  File "<string>", line 7, in create_table
  File "/usr/local/lib/python2.7/dist-packages/alembic/operations.py", line 574, in create_table
    self._table(name, *columns, **kw)
  File "/usr/local/lib/python2.7/dist-packages/alembic/ddl/impl.py", line 148, in create_table
    self._exec(schema.CreateTable(table))
  File "/usr/local/lib/python2.7/dist-packages/alembic/ddl/impl.py", line 75, in _exec
    conn.execute(construct, *multiparams, **params)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 664, in execute
    params)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 723, in _execute_ddl
    compiled
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 878, in _execute_context
    context)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 871, in _execute_context
    context)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 320, in do_execute
    cursor.execute(statement, parameters)
  File "/usr/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 174, in execute
    self.errorhandler(self, exc, value)
  File "/usr/lib/python2.7/dist-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
    raise errorclass, errorvalue
sqlalchemy.exc.OperationalError: (OperationalError) (1005, "Can't create table 
'saintdb2.mytest1' (errno: 150)") '\nCREATE TABLE mytest1 (\n\tid INTEGER NOT 
NULL AUTO_INCREMENT, \n\tname VARCHAR(80) NOT NULL, \n\ttype_id INTEGER, 
\n\ttelephone TEXT, \n\tcontactname VARCHAR(45) NOT NULL, \n\tPRIMARY KEY (id), 
\n\tFOREIGN KEY(type_id) REFERENCES mytest (id), \n\tUNIQUE 
(name)\n)ENGINE=InnoDB DEFAULT CHARSET=ascii\n\n' ()

---
When I removed the foreign key constraint, I got the below:

root@luck:/home/Documents# alembic upgrade head
INFO  [alembic.migration] Context impl MySQLImpl.
INFO  [alembic.migration] Will assume non-transactional DDL.
INFO  [alembic.migration] Running upgrade None -> d28a086e79d

Hope the output is helpful for your diagnosis of my issue. Thanks a lot. 

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




Re: [sqlalchemy] mysql_DEFAULT_CHARSET doesn't work with ForeignKeyConstraint func

2013-02-21 Thread Michael Bayer
One thing is that both tables need to be InnoDB (or both not, in which case the 
FK is moot).
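Concretely, for the migration quoted earlier that would mean creating the
referenced table with the same options (a sketch; only the two mysql_*
keyword arguments are new, and the charset line is just matching the child
table to be safe):

def upgrade():
    op.create_table('test1',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.PrimaryKeyConstraint('id'),
        mysql_ENGINE='InnoDB',
        mysql_DEFAULT_CHARSET='ascii')
    # ... create_table('test2', ...) as before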






[sqlalchemy] is there a more proper way to chain dynamic or clauses ?

2013-02-21 Thread Jonathan Vanasco
Based on a bunch of error messages, this example works...

criteria = (('male', 35), ('female', 35))
query = session.query(model.Useraccount)
ands = []
for set_ in criteria:
    ands.append(
        sqlalchemy.sql.expression.and_(
            model.Useraccount.gender == set_[0],
            model.Useraccount.age == set_[1],
        )
    )
query = query.filter(
    sqlalchemy.sql.expression.or_(*ands)
)
results = query.all()

This seems really awkward, though. Is there a better way to build up a
set of dynamic OR criteria?
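One somewhat tidier construction (a sketch of the same query, just built
with a comprehension rather than an accumulator list):

from sqlalchemy import and_, or_

criteria = (('male', 35), ('female', 35))

query = session.query(model.Useraccount).filter(
    or_(*[and_(model.Useraccount.gender == gender,
               model.Useraccount.age == age)
          for gender, age in criteria])
)
results = query.all()

On backends that support row-value comparisons, tuple_(...).in_(...) is an
even shorter spelling of the same predicate:

from sqlalchemy import tuple_

query = session.query(model.Useraccount).filter(
    tuple_(model.Useraccount.gender, model.Useraccount.age).in_(criteria)
)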

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




Re: [sqlalchemy] mysql_DEFAULT_CHARSET doesn't work with ForeignKeyConstraint func

2013-02-21 Thread junepeach
Thank you very much! It works now :)

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.




[sqlalchemy] Drop index doesn't update table metadata

2013-02-21 Thread Rick Harris
Hi all,

I recently ran into an issue where one of our `sqlalchemy-migrate` 
downgrade scripts was failing because it was trying to drop an index twice.

The root cause appears to be that SQLAlchemy's `idx.drop()` call does not 
update the `indexes` set for the associated table.

I've attached a test case which hopefully explains the situation a little 
better.

My question is: is this expected behavior? Given that `index.create` 
mutates `table.indexes`, should we expect `index.drop` to do the inverse 
and remove it?

My naive assumption is that it should, but if that's wrong, I'd love to hear why.

Thanks!

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.


import sqlalchemy as sa


def test_drop_index():
    """Dropping an index does not remove its reference from associated tables.

    The real-world breakage occurs when you combine this behavior w/
    sqlalchemy-migrate's `drop_column`:

        idx_on_colA.drop()
        tableX.drop_column('colA')  # <- This will attempt to DROP INDEX again!

    The problem is that when sqlalchemy-migrate goes to delete the column, it
    erroneously still sees the index present on `colA` and so attempts to
    re-drop the index before dropping the column. Since the index from the
    DB's perspective is no longer present, we get an error.

    The workaround we have is:

        idx_on_colA.drop()
        tableX.indexes.remove(idx_on_colA)

    See:
    https://review.openstack.org/#/c/22628/1/nova/db/sqlalchemy/migrate_repo/versions/144_add_node_to_migrations.py,unified

    Versions affected:
        This affects 0.8.0b2 and earlier.
    """
    engine = sa.create_engine('sqlite:///')
    engine.echo = True
    meta = sa.MetaData(engine)
    users = sa.Table('users', meta,
                     sa.Column('id', sa.Integer, primary_key=True),
                     sa.Column('email', sa.Unicode))

    assert len(users.indexes) == 0
    email_idx = sa.Index('idx_users_email', users.c.email)
    meta.create_all()

    assert len(users.indexes) == 1
    email_idx.drop()

    # This fails, since `users` retains a ref to `email_idx`
    assert len(users.indexes) == 0, 'email_idx still present'


if __name__ == '__main__':
    test_drop_index()