[web2py] Populating widgets with queries

2012-09-08 Thread Mike Girard
I have a checkboxes widget which I invoke like so:

form = SQLFORM.factory(
    Field('test',
          type='string',
          requires=IS_IN_DB(db, db.city.name_url, '%(name)s', multiple=True),
          widget=lambda f, v: SQLFORM.widgets.checkboxes.widget(f, v, style='divs'),
          default='New-York'),
    formstyle='divs')

I use requires=IS_IN_DB solely to populate the checkboxes with fresh data. 
I don't really need the validation. Now I would prefer to spread the data 
in the one table being used across multiple checkbox groups. Is there an 
out-of-the-box way to populate form elements with queries instead of just 
binding them to tables? 

-- 





[web2py] Re: Populating widgets with queries

2012-09-08 Thread Mike Girard
I should add that it would be easy for me to customize form elements this 
way, but I would like to do so in a way that does not sacrifice existing 
form functionality and is just generally in conformance with the web2py 
way. 

On Saturday, September 8, 2012 2:22:21 AM UTC-4, Mike Girard wrote:

 I have a checkboxes widget which I invoke like so:

 form = SQLFORM.factory(
     Field('test',
           type='string',
           requires=IS_IN_DB(db, db.city.name_url, '%(name)s', multiple=True),
           widget=lambda f, v: SQLFORM.widgets.checkboxes.widget(f, v, style='divs'),
           default='New-York'),
     formstyle='divs')

 I use requires=IS_IN_DB solely to populate the checkboxes with fresh data. 
 I don't really need the validation. Now I would prefer to spread the data 
 in the one table being used across multiple checkbox groups. Is there an 
 out-of-the-box way to populate form elements with queries instead of just 
 binding them to tables? 


-- 





Re: [web2py] Populating widgets with queries

2012-09-08 Thread Mike Girard
The IS_IN_DB etc didn't work for me -- perhaps I did something wrong -- but 
the variations with IS_IN_SET do. Thanks!
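
For the archive, a minimal sketch of the approach that ended up working,
assuming a hypothetical db.city table with name_url, name, and region fields;
each region gets its own checkbox group by feeding a per-region query result
into IS_IN_SET:

def region_checkboxes(region):
    # build (value, label) pairs from a query rather than binding to a whole table
    rows = db(db.city.region == region).select(db.city.name_url, db.city.name)
    options = [(r.name_url, r.name) for r in rows]
    return Field(region, type='string',
                 requires=IS_IN_SET(options, multiple=True),
                 widget=lambda f, v: SQLFORM.widgets.checkboxes.widget(f, v, style='divs'))

form = SQLFORM.factory(region_checkboxes('east'), region_checkboxes('west'),
                       formstyle='divs')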

On Saturday, September 8, 2012 2:46:24 AM UTC-4, rochacbruno wrote:

 maybe this?

 form = SQLFORM.factory(
     Field('test',
           type='string',
           requires=IS_IN_DB(db(db.table.field == xyz), db.city.name_url, '%(name)s', multiple=True),
           widget=lambda f, v: SQLFORM.widgets.checkboxes.widget(f, v, style='divs'),
           default='New-York'),
     formstyle='divs')

 or

 myset = [('value 1', 'text 1'), ('value 2', 'text 2'), ('value 3', 'text 3')]

 form = SQLFORM.factory(
     Field('test',
           type='string',
           requires=IS_IN_SET(myset, multiple=True),
           widget=lambda f, v: SQLFORM.widgets.checkboxes.widget(f, v, style='divs'),
           default='New-York'),
     formstyle='divs')

 or

 myset = {'value 1': 'text 1', 'value 2': 'text 2', 'value 3': 'text 3'}

 form = SQLFORM.factory(
     Field('test',
           type='string',
           requires=IS_IN_SET(myset, multiple=True),
           widget=lambda f, v: SQLFORM.widgets.checkboxes.widget(f, v, style='divs'),
           default='New-York'),
     formstyle='divs')

 or


 myset = db.executesql("SELECT value, text FROM sometable")

 form = SQLFORM.factory(
     Field('test',
           type='string',
           requires=IS_IN_SET(myset, multiple=True),
           widget=lambda f, v: SQLFORM.widgets.checkboxes.widget(f, v, style='divs'),
           default='New-York'),
     formstyle='divs')


 Bruno Cezar Rocha - @rochacbruno
 rocha...@gmail.com | Mobile: +55 (11) 99210-8821
 www.CursoDePython.com.br | www.rochacbruno.com.br
 Blog: Using Python to get all the external links from a webpage
 http://rochacbruno.com.br/using-python-to-get-all-the-external-links-from-a-webpage/




 On Sat, Sep 8, 2012 at 3:22 AM, Mike Girard <mikegi...@gmail.com> wrote:

 I have a checkboxes widget which I invoke like so:

 form = SQLFORM.factory(
     Field('test',
           type='string',
           requires=IS_IN_DB(db, db.city.name_url, '%(name)s', multiple=True),
           widget=lambda f, v: SQLFORM.widgets.checkboxes.widget(f, v, style='divs'),
           default='New-York'),
     formstyle='divs')

 I use requires=IS_IN_DB solely to populate the checkboxes with fresh 
 data. I don't really need the validation. Now I would prefer to spread the 
 data in the one table being used across multiple checkbox groups. Is there 
 an out-of-the-box way to populate form elements with queries instead of 
 just binding them to tables? 
  
 -- 
  
  
  





-- 





[web2py] Some executesql questions

2012-09-06 Thread Mike Girard
1. Is there a way to pass parameters into the SQL, in accordance with the 
psycopg2 recommendation? I don't see anything in the docstring about that. 
Does it matter?
2. Do the SQL execution times reported by the toolbar include anything 
besides the execution time of the actual database query? Queries executed 
through web2py are taking three times longer or more than the same queries 
executed through pgAdmin. They seem to be equally fast using psycopg2 via 
the Python command line. 

-- 





[web2py] Re: Some executesql questions

2012-09-06 Thread Mike Girard
The db.executesql() method takes a placeholders argument, which can be a 
list, tuple, or dictionary of positional or named parameters to be filled 
in the query. This is the first thing mentioned in the docstring.

Doh! I swear I go selectively blind in a masochistic desire to make a fool 
of myself.  PLEASE believe me when I say I looked at the docs before 
asking. I am not good with docs. I do better with examples.
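
Speaking of examples, a minimal sketch of the placeholders usage (the table 
and values here are hypothetical):

# positional parameters
db.executesql('SELECT name FROM city WHERE population > %s', placeholders=(500000,))
# named parameters
db.executesql('SELECT name FROM city WHERE country = %(c)s', placeholders={'c': 'US'})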



On Thursday, September 6, 2012 11:25:10 AM UTC-4, Anthony wrote:

 On Thursday, September 6, 2012 10:38:29 AM UTC-4, Mike Girard wrote:

 1. Is there a way to pass in parameters to the sql, in accordance with 
 the psycopg2 recommendation? I don't see anything in the docstring about 
 that. Does it matter?


 The db.executesql() method takes a placeholders argument, which can be a 
 list, tuple, or dictionary of positional or named parameters to be filled 
 in the query. This is the first thing mentioned in the docstring.
  

 2. Do the sql execution times reported by the toolbar include anything 
 besides the execution time of the actual database query? Queries executed 
 through web2py are taking 3 times longer or more than the same queries 
 executed through PGadmin. They seem to be equally fast using psycopg2 via 
 the python command line.


 Here's the code:

 t0 = time.time()
 ret = self.cursor.execute(*a, **b)
 self.db._timings.append((command,time.time()-t0))

 So, it looks like it's just the database execution time (going through the 
 Python driver). Not sure why it would differ from the command line.
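
 (As a side note, a sketch of how to inspect those timings yourself from a 
 controller or the shell, based on the _timings list shown above:)

 for sql, seconds in db._timings:
     print seconds, sql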

 Anthony


-- 





[web2py] Checkbox widget with divs?

2012-09-06 Thread Mike Girard
The book says that the checkbox widget takes a 'style' argument for which 
'divs' is a possible value. What modification is required in the following 
code? Thanks.

form = SQLFORM.factory(
    Field('test',
          type='string',
          requires=IS_IN_DB(db, db.person.name_url, '%(name)s', multiple=True),
          widget=SQLFORM.widgets.checkboxes.widget,
          default='Classic-Movies'),
    formstyle='divs')

-- 





[web2py] Re: Checkbox widget with divs?

2012-09-06 Thread Mike Girard
That worked perfectly with one minor alteration: 'formstyle' to 'style'

widget=lambda f, v: SQLFORM.widgets.checkboxes.widget(f, v, style='divs')

On Thursday, September 6, 2012 3:40:45 PM UTC-4, Anthony wrote:

 Try

 widget=lambda f, v: SQLFORM.widgets.checkboxes.widget(f, v, formstyle='divs')

 Anthony



-- 





[web2py] Re: web2py 2.0.2 is out

2012-08-30 Thread Mike Girard
Great news. Can't wait to try it out. Thanks, everyone!

On Wednesday, August 29, 2012 11:41:34 PM UTC-4, Massimo Di Pierro wrote:

 After 5 months, it is done. This is the most awaited and the most 
 feature-packed release.

 I am sure we'll find some corners that need to be ironed out, but it is 
 considerably better than 1.99.7. It adds a lot of new features and improves 
 many existing ones:

 - 57,000 new lines of code and closed 279 issues since 1.99.7.
 - Retrieving data from DB should be faster, in particular 
 select(cacheable=True)
 - Has a new scheduler, a built-in wiki, new language and pluralization 
 system, better markmin with oembed support and better scaffolding app, 
 increased security.
 - Lots of experimental features including GIS support, mongodb support, 
 built-in auth.wiki(), and more.

 Should be 100% backward compatible. If you run into any issue let us know 
 ASAP.

 I personally want to thank the major contributors to this release (in 
 alphabetic order)
 Alan, Andrew, Anthony, Bruno, Christian, Dave, Dominic, Iceberg, Jonathan, 
 Marc, Mariano, Marin, Martin, Mark, Michael, Michele, Niphlod, Patrick, 
 Vladyslav, 
 They spent many nights testing, coding, and debugging at a very fast pace.

 Many many people have contributed.

 If your contribution has not been properly acknowledged please let us know 
 ASAP. It is probably an oversight.


 Massimo


 Detailed changelog
 ===

 ## 2.00.2

 ### DAL Improvements

 - Support for DAL(lazy_tables=True) and db.define_table(on_define=lambda 
 table:), thanks Jonathan
 - db(...).select(cacheable=True) make select 30% faster
 - db(...).select(cache=(cache.ram,3600)) now caches parsed data 100x faster
 - db(...).count(cache=(cache.ram,3600)) now supported
 - MongoDB support in DAL (experimental), thanks Mark Breedveld
 - geodal and spatialite, thanks Denes and Fran (experimental)
 - db.mytable._before_insert, _after_insert, _before_update, _after_update, 
 _before_delete. _after_delete (list of callbacks)
 - db(...).update_naive(...) same as update but ignores 
 table._before_update and table._after_update
 - DAL BIGINT support and DAL(...,bigint_id=True)
 - IS_IN_DB(..., distinct=True)
 - new syntax: db.mytable.insert(myuploadfield=open()), thank you 
 Iceberg
 - db(...).select(db.mytable.myfield.count(distinct=True))
 - db(db.a)._update(name=db(db.b.a==db.a.id).nested_select(db.b.id))
 - db.mytable.myfield.filter_in, filter_out
 - db.mytable._enable_record_versioning(db) adds versioning to this table
 - teradata adapter, thanks Andrew Willimott
 - experimental Sybase Adapter
 - added db.table.field.avg()
 - Support for Google App Engine projections, thanks Christian
 - Field(... 'upload', default=path) now accepts a path to a local file as 
 default value, if user does not upload a file. Relative path looks inside 
 current application folder, thanks Marin
 - executesql(...,fields=,columns=) allows parsing of results in Rows, 
 thanks Anthony

 ### Auth improvements

 - auth.enable_record_versioning(db)  adds full versioning to all tables
 - @auth.requires_login(otherwise=URL(...))
 - auth supports salt and compatible with third party data, thanks Dave 
 Stoll
 - CRYPT now defaults to pbkdf2(1000,20,sha1)
 - Built-in wiki with menu, tags, search, media, permissions. def index: 
 return auth.wiki()
 - auth.settings.everybody_group_id
 - allow storage of uploads on any PyFileSystem (including amazon)

 ### Form improvements

 - FORM.confirm('Are you sure?',{'Back':URL(...)})
 - SQLFORM.smartdictform(dict)
 - form.add_button(value,link)
 - SQLFORM.grid(groupby='...')
 - fixed security issue with SQLFORM.grid and SQLFORM.smartgrid
 - more export options in SQLFORM.grid and SQLFORM.smartgrid (html, xml, 
 csv, ...)

 ### Admin improvements

 - new admin pages: manage_students, bulk_regsiter, and progress reports
 - increased security in admin against CSRF
 - experimental Git integration
 - experimental OpenShift deployment
 - multi-language pluralization engine 
 - ace text web editor in admin
 - Ukrainian translations, thanks Vladyslav Kozlovskyy
 - Romanian translation for welcome, thanks ionel
 - support for mercurial 2.6, thanks Vlad

 ### Scheduler Improvements (thanks to niphlod, ykessler, dhx, toomim)

 - web2py.py -K myapp -X starts the myapp scheduler alongside the webserver
 - tasks are marked EXPIRED (if stop_time passed)
 - functions with no result don't end up in scheduler_run
 - more options: web2py.py -E -b -L
 - scheduler can now handle 10k tasks with 20 concurrent workers and with 
 no issues
 - new params:
 tasks can be found in the environment (no need to define the tasks 
 parameter)
 max_empty_runs kills the workers automatically if no new tasks are 
 found in queue (nice for spikes of processing power)
 discard_results to completely discard the results (if you don't need 
 the output of the task)
 utc_time enables datetime calculations with UTC time
 - scheduler_task changes:
 task_name is no 

[web2py] Usage example for 'executesql(...,fields=,columns=) allows parsing of results in Rows'

2012-08-30 Thread Mike Girard
So today's announcement of the new release included a suggestion that an 
executesql result can be returned as Rows. If this is true, can I see a 
usage example? A little puzzled by 'fields=,columns='? 

-- 





[web2py] Re: Usage example for 'executesql(...,fields=,columns=) allows parsing of results in Rows'

2012-08-30 Thread Mike Girard
Sorry, didn't mean to have people running errands for me. Where is the 
docstring? Was poking around 
here http://www.web2py.com/examples/static/epydoc/index.html and couldn't 
find what I was looking for. 

Too much newbieness, I know: Python AND web2py.

On Thursday, August 30, 2012 4:56:09 PM UTC-4, Anthony wrote:

 On Thursday, August 30, 2012 4:31:37 PM UTC-4, Mike Girard wrote:

 So today's announcement of the new release included a suggestion that an 
 executesql result can be returned as Rows. If this is true, can I see a 
 usage example? A little puzzled by 'fields=,columns='?


 db.define_table('person', Field('name'), Field('email'))
 db.define_table('dog', Field('name'), Field('owner', 'reference person'))
 db.executesql([SQL code returning person.name and dog.name fields], fields=[db.person.name, db.dog.name])
 db.executesql([SQL code returning all fields from db.person], fields=db.person)
 db.executesql([SQL code returning all fields from both tables], fields=[db.person, db.dog])
 db.executesql([SQL code returning person.name and all db.dog fields], fields=[db.person.name, db.dog])

 Here's the docstring:

 Added 2012-08-24 "fields" optional argument. If not None, the
 results cursor returned by the DB driver will be converted to a
 DAL Rows object using the db._adapter.parse() method. This requires
 specifying the fields argument as a list of DAL Field objects
 that match the fields returned from the DB. The Field objects should
 be part of one or more Table objects defined on the DAL object.
 The fields list can include one or more DAL Table objects in addition
 to or instead of including Field objects, or it can be just a single
 table (not in a list). In that case, the Field objects will be
 extracted from the table(s).

 The field names will be extracted from the Field objects, or optionally,
 a list of field names can be provided (in tablename.fieldname format)
 via the colnames argument. Note, the fields and colnames must be in
 the same order as the fields in the results cursor returned from the DB.

 Anthony



-- 





[web2py] Re: Usage example for 'executesql(...,fields=,columns=) allows parsing of results in Rows'

2012-08-30 Thread Mike Girard
It's screaming invalid literals anyway. Think I'll skip it. 

On Thursday, August 30, 2012 5:07:19 PM UTC-4, Niphlod wrote:

 right doc location. executesql is a method of dal, so click here and there 
 and you'll end up at 
 http://www.web2py.com/examples/static/epydoc/web2py.gluon.dal.DAL-class.html#executesql

 On Thursday, August 30, 2012 11:00:30 PM UTC+2, Mike Girard wrote:

 Sorry, didn't mean to have people running errands for me. Where is the 
 docstring? Was poking around here 
 http://www.web2py.com/examples/static/epydoc/index.html and couldn't 
 find what I was looking for. 

 Too much newbieness i know: Python AND Web2py.

 On Thursday, August 30, 2012 4:56:09 PM UTC-4, Anthony wrote:

 On Thursday, August 30, 2012 4:31:37 PM UTC-4, Mike Girard wrote:

 So today's announcement of the new release included a suggestion that 
 an executesql result can be returned as Rows. If this is true, can I see a 
 usage example? A little puzzled by 'fields=,columns='?


 db.define_table('person', Field('name'), Field('email'))
 db.define_table('dog', Field('name'), Field('owner', 'reference person'))
 db.executesql([SQL code returning person.name and dog.name fields], fields=[db.person.name, db.dog.name])
 db.executesql([SQL code returning all fields from db.person], fields=db.person)
 db.executesql([SQL code returning all fields from both tables], fields=[db.person, db.dog])
 db.executesql([SQL code returning person.name and all db.dog fields], fields=[db.person.name, db.dog])

 Here's the docstring:

 Added 2012-08-24 "fields" optional argument. If not None, the
 results cursor returned by the DB driver will be converted to a
 DAL Rows object using the db._adapter.parse() method. This requires
 specifying the fields argument as a list of DAL Field objects
 that match the fields returned from the DB. The Field objects should
 be part of one or more Table objects defined on the DAL object.
 The fields list can include one or more DAL Table objects in addition
 to or instead of including Field objects, or it can be just a single
 table (not in a list). In that case, the Field objects will be
 extracted from the table(s).

 The field names will be extracted from the Field objects, or optionally,
 a list of field names can be provided (in tablename.fieldname format)
 via the colnames argument. Note, the fields and colnames must be in
 the same order as the fields in the results cursor returned from the DB.

 Anthony



-- 





[web2py] Re: Usage example for 'executesql(...,fields=,columns=) allows parsing of results in Rows'

2012-08-30 Thread Mike Girard
I hadn't put my fields in order nor told it about my aliases. Now that I 
have, it's dreamy. You are my hero. 
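
For anyone who lands here later, the key detail (per the docstring above) is 
that fields/colnames must line up, column for column, with the SELECT list. A 
hedged sketch, using hypothetical movie/actor/person tables:

sql = ('SELECT person.name, movie.title '
       'FROM movie JOIN actor ON movie.id = actor.movie_id '
       'JOIN person ON actor.person_id = person.id')
# fields must be listed in the same order as the columns the SQL returns
rows = db.executesql(sql, fields=[db.person.name, db.movie.title])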

On Thursday, August 30, 2012 6:03:26 PM UTC-4, Anthony wrote:

 On Thursday, August 30, 2012 5:24:34 PM UTC-4, Mike Girard wrote:

 It's screaming invalid literals anyway. Think I'll skip it.


 Tell us more. If something isn't working, we'd rather fix it than skip it. 
 :-)

 Anthony 


-- 





[web2py] Re: web2py 2.0.2 is out

2012-08-30 Thread Mike Girard
This seems like a good place to say that leaving php/Drupal for 
Python/web2py has been like getting out of jail. This is great software 
with a supernaturally patient and helpful community. 

On Wednesday, August 29, 2012 11:41:34 PM UTC-4, Massimo Di Pierro wrote:

 After 5 months, it is done. This is the most awaited and the most 
 feature-packed release.

 I am sure we'll find some corners that need to be ironed out, but it is 
 considerably better than 1.99.7. It adds a lot of new features and improves 
 many existing ones:

 - 57,000 new lines of code and closed 279 issues since 1.99.7.
 - Retrieving data from DB should be faster, in particular 
 select(cacheable=True)
 - Has a new scheduler, a built-in wiki, new language and pluralization 
 system, better markmin with oembed support and better scaffolding app, 
 increased security.
 - Lots of experimental features including GIS support, mongodb support, 
 built-in auth.wiki(), and more.

 Should be 100% backward compatible. If you run into any issue let us know 
 ASAP.

 I personally want to thank the major contributors to this release (in 
 alphabetic order)
 Alan, Andrew, Anthony, Bruno, Christian, Dave, Dominic, Iceberg, Jonathan, 
 Marc, Mariano, Marin, Martin, Mark, Michael, Michele, Niphlod, Patrick, 
 Vladyslav, 
 They spent many nights testing, coding, and debugging at a very fast pace.

 Many many people have contributed.

 If your contribution has not been properly acknowledged please let us know 
 ASAP. It is probably an oversight.


 Massimo


 Detailed changelog
 ===

 ## 2.00.2

 ### DAL Improvements

 - Support for DAL(lazy_tables=True) and db.define_table(on_define=lambda 
 table:), thanks Jonathan
 - db(...).select(cacheable=True) make select 30% faster
 - db(...).select(cache=(cache.ram,3600)) now caches parsed data 100x faster
 - db(...).count(cache=(cache.ram,3600)) now supported
 - MongoDB support in DAL (experimental), thanks Mark Breedveld
 - geodal and spatialite, thanks Denes and Fran (experimental)
 - db.mytable._before_insert, _after_insert, _before_update, _after_update, 
 _before_delete. _after_delete (list of callbacks)
 - db(...).update_naive(...) same as update but ignores 
 table._before_update and table._after_update
 - DAL BIGINT support and DAL(...,bigint_id=True)
 - IS_IN_DB(..., distinct=True)
 - new syntax: db.mytable.insert(myuploadfield=open()), thank you 
 Iceberg
 - db(...).select(db.mytable.myfield.count(distinct=True))
 - db(db.a)._update(name=db(db.b.a==db.a.id).nested_select(db.b.id))
 - db.mytable.myfield.filter_in, filter_out
 - db.mytable._enable_record_versioning(db) adds versioning to this table
 - teradata adapter, thanks Andrew Willimott
 - experimental Sybase Adapter
 - added db.table.field.avg()
 - Support for Google App Engine projections, thanks Christian
 - Field(... 'upload', default=path) now accepts a path to a local file as 
 default value, if user does not upload a file. Relative path looks inside 
 current application folder, thanks Marin
 - executesql(...,fields=,columns=) allows parsing of results in Rows, 
 thanks Anthony

 ### Auth improvements

 - auth.enable_record_versioning(db)  adds full versioning to all tables
 - @auth.requires_login(otherwise=URL(...))
 - auth supports salt and compatible with third party data, thanks Dave 
 Stoll
 - CRYPT now defaults to pbkdf2(1000,20,sha1)
 - Built-in wiki with menu, tags, search, media, permissions. def index: 
 return auth.wiki()
 - auth.settings.everybody_group_id
 - allow storage of uploads on any PyFileSystem (including amazon)

 ### Form improvements

 - FORM.confirm('Are you sure?',{'Back':URL(...)})
 - SQLFORM.smartdictform(dict)
 - form.add_button(value,link)
 - SQLFORM.grid(groupby='...')
 - fixed security issue with SQLFORM.grid and SQLFORM.smartgrid
 - more export options in SQLFORM.grid and SQLFORM.smartgrid (html, xml, 
 csv, ...)

 ### Admin improvements

 - new admin pages: manage_students, bulk_regsiter, and progress reports
 - increased security in admin against CSRF
 - experimental Git integration
 - experimental OpenShift deployment
 - multi-language pluralization engine 
 - ace text web editor in admin
 - Ukrainian translations, thanks Vladyslav Kozlovskyy
 - Romanian translation for welcome, thanks ionel
 - support for mercurial 2.6, thanks Vlad

 ### Scheduler Improvements (thanks to niphlod, ykessler, dhx, toomim)

 - web2py.py -K myapp -X starts the myapp scheduler alongside the webserver
 - tasks are marked EXPIRED (if stop_time passed)
 - functions with no result don't end up in scheduler_run
 - more options: web2py.py -E -b -L
 - scheduler can now handle 10k tasks with 20 concurrent workers and with 
 no issues
 - new params:
 tasks can be found in the environment (no need to define the tasks 
 parameter)
 max_empty_runs kills the workers automatically if no new tasks are 
 found in queue (nice for spikes of processing power)
 discard_results to completely discard the results (if you don't 

[web2py] Help with aliases in queries

2012-08-29 Thread Mike Girard
Hello, 

Could someone tell me how I should modify the following to alias the 
results for actor and director?  Both actors and directors consist of a 
movie_id and a person_id, linking them up respectively to a person.id and a 
movie.id. 

db(db.movie.title == 'Citizen Kane').select(db.movie.title, db.person.name as star, db.person.name as director,
    left = [db.actor.on(db.movie.id == db.actor.movie_id),
            db.person.on(db.actor.person_id == db.person.id),
            db.director.on(db.movie.id == db.director.movie_id),
            db.person.on(db.director.person_id == db.person.id)]
    )



Not crufting this up with my model since I think this is a syntax issue. My 
model is very standard many-to-many. 




-- 





[web2py] Re: Help with aliases in queries

2012-08-29 Thread Mike Girard
To clarify, I would like the DAL syntax that produces a query along the 
following lines, where joins are using the same lookup table. 

SELECT  movie.title, star.name as star, dir.name as director
FROM movie 
LEFT JOIN actor ON (movie.id = actor.movie_id) 
LEFT JOIN person as star ON (actor.person_id = star.id) 
LEFT JOIN director ON (movie.id = director.movie_id) 
LEFT JOIN person as dir ON (director.person_id = dir.id) 
ORDER BY movie.average_rating, movie.id LIMIT 1000 OFFSET 0;



On Wednesday, August 29, 2012 6:10:46 PM UTC-4, Mike Girard wrote:

 Hello, 

 Could someone tell me how I should modify the following to alias the 
 results for actor and director?  Both actors and directors consist of a 
 movie_id and a person_id, linking them up respectively to a person.id and 
 a movie.id. 

 db(db.movie.title == 'Citizen Kane').select(db.movie.title, db.person.name as star, db.person.name as director,
     left = [db.actor.on(db.movie.id == db.actor.movie_id),
             db.person.on(db.actor.person_id == db.person.id),
             db.director.on(db.movie.id == db.director.movie_id),
             db.person.on(db.director.person_id == db.person.id)]
     )



 Not crufting this up with my model since I think this is a syntax issue. 
 My model is very standard many-to-many. 






-- 





[web2py] Re: Help with aliases in queries

2012-08-29 Thread Mike Girard
Perfect. 

I had seen 'with_alias' explained elsewhere, but couldn't figure out 
precisely how to use it. 

Thanks so much!

On Wednesday, August 29, 2012 6:55:09 PM UTC-4, Massimo Di Pierro wrote:

 In web2py you can alias tables but not fields, just because that is all you 
 need to resolve ambiguous joins. The aliasing of fields is not necessary 
 functionally. You want:

 star = db.person.with_alias('star')
 dir = db.person.with_alias('dir')

 db(db.movie.title == 'Citizen Kane').select(db.movie.title, star.name, dir.name,
     left = [db.actor.on(db.movie.id == db.actor.movie_id),
             star.on(db.actor.person_id == star.id),
             db.director.on(db.movie.id == db.director.movie_id),
             dir.on(db.director.person_id == dir.id)])
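
 A quick follow-up note (an editorial sketch, not from the original thread): 
 with aliased tables the joined Rows are keyed by the alias names, so, assuming 
 the select above is assigned to rows, the values come back roughly like this:

 for row in rows:
     print row.movie.title, row.star.name, row.dir.name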

 On Wednesday, 29 August 2012 17:10:46 UTC-5, Mike Girard wrote:

 Hello, 

 Could someone tell me how I should modify the following to alias the 
 results for actor and director?  Both actors and directors consist of a 
 movie_id and a person_id, linking them up respectively to a person.id and a 
 movie.id. 

 db(db.movie.title == 'Citizen Kane').select(db.movie.title, db.person.name as star, db.person.name as director,
     left = [db.actor.on(db.movie.id == db.actor.movie_id),
             db.person.on(db.actor.person_id == db.person.id),
             db.director.on(db.movie.id == db.director.movie_id),
             db.person.on(db.director.person_id == db.person.id)]
     )



 Not crufting this up with my model since I think this is a syntax issue. 
 My model is very standard many-to-many. 






-- 





Re: [web2py] Re: Loading data into list:reference fields eliciting strange NoneType not iterable' error

2012-08-23 Thread Mike Girard
Thanks for all your help, Anthony.

I'd like to cast a vote for moving bug fixes into stable as soon as possible, 
regardless of what's on the horizon. I keep seeing replies on this board 
directing people to trunk, which likely has new bugs.  I think this is at odds 
with adoption. 

-- 





[web2py] how to parse xml in web2py?

2012-08-23 Thread Mike Girard
ElementTree or cElementTree are good; cElementTree is said to be faster.

The most important thing is the method you use. If your sample is the extent of 
the document, any parser will do. However, if your actual documents are large, 
you should consider an iterparse method to conserve memory, as sketched below.

I think the most feature-rich parser is lxml, which supports XSLT 
and XPath along with the ElementTree API. However, it requires separate 
installation. Python ships with ElementTree and cElementTree, which are fine for 
most cases. 
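
To make the iterparse point concrete, a minimal sketch with cElementTree (the 
file name and the <movie> tag are hypothetical):

import xml.etree.cElementTree as ET

def iter_movies(path):
    # stream the document; only one <movie> subtree is kept in memory at a time
    for event, elem in ET.iterparse(path, events=('end',)):
        if elem.tag == 'movie':
            yield elem.findtext('title')
            elem.clear()  # discard the subtree we just processed

for title in iter_movies('movies.xml'):
    print title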

-- 





[web2py] General knowledge questions about reference fields

2012-08-23 Thread Mike Girard
Still working on optimizing updates. 

Could someone provide high-level answers to the following?

1. When you do a select that includes a reference field, how does it fetch, 
say, the name column, when only the id column is in the table you are doing 
a select on?

2. For many-to-many relationships, how does list:reference compare in 
performance and convenience with using linking tables and joins? 

3. My list:reference fields hold a series of integer values, but they do so 
in a column of type 'text'. They are strings of integers separated with 
'|'. If I were going to use SQL, rather than DAL methods, to update a 
list:reference field of this kind, would inserting a string like this - 
'|100|300|450|' - work the same as a field updated by the DAL with a list? 
Or is the DAL doing something else behind the scenes when you update the 
field that would be excluded from a simple SQL update? 

Grateful for any input.

Thanks. 

-- 





[web2py] Re: General knowledge questions about reference fields

2012-08-23 Thread Mike Girard
Every question answered. Thanks a lot, Anthony!

On Thursday, August 23, 2012 3:27:07 PM UTC-4, Anthony wrote:

 1. When you do a select that includes a reference field, how does it 
 fetch, say, the name column, when only the id column is in the table you 
 are doing a select on?


 Unless you do a join, it does not fetch the name column (presumably you 
 mean the name column of the referenced table) at the time of the initial 
 select. However, if you access the name attribute of one of the returned 
 Row objects, it will do a query to fetch the name field for that 
 particular record. If you check the type of the reference field of the Row 
 object, you'll see that it is a DAL Reference object -- this holds the 
 value of the referenced record id but also knows how to fetch the 
 associated record from the referenced table. This is called a recursive 
 select http://web2py.com/books/default/chapter/29/6#Recursive-selects. 
 Note, because recursive selects involve a separate query for each record, 
 if you need the referenced fields for many records, it is more efficient to 
 do a single query with a join instead.
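
 As a sketch of the difference (an editorial example using hypothetical dog/person 
 tables, where dog.owner is a 'reference person' field):

 row = db.dog(1)               # fetch one dog; row.owner holds the person id
 print row.owner.name          # triggers a second query (a recursive select)

 # for many records, one join is cheaper than N recursive selects:
 rows = db(db.dog.owner == db.person.id).select(db.dog.name, db.person.name)
 for r in rows:
     print r.dog.name, r.person.name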
  

 3. My list:reference fields hold a series of integer values but they do 
 so in a column of type 'text' . They are strings of integers separated with 
 '|'. If I were going to use SQL, rather than DAL methods to update a 
 list:reference field of this kind, would inserting a string like this - 
 '|100|300|450|' - work the same as a field updated by the DAL with a list? 
 Or is the DAL doing something else behind the scenes when you update the 
 field that would be excluded from a simple SQL update.


 I don't think the DAL is doing anything special other than creating a 
 string as you describe above, so you should be able to make your own 
 inserts using SQL if you like.
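
 For example, a hedged sketch of such a raw update (table and column names 
 assumed, psycopg2-style placeholders used to avoid quoting issues):

 db.executesql("UPDATE movie SET genres = %s WHERE id = %s",
               placeholders=('|100|300|450|', 42))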

 Anthony 


-- 





[web2py] Loading data into list:reference fields eliciting strange NoneType not iterable' error

2012-08-22 Thread Mike Girard
I have written a script to parse a large xml file and insert the contents 
in to my app db. I am using lxml.

After about 10 records get inserted, the script fails with 
<type 'exceptions.TypeError'> argument of type 'NoneType' is not iterable

Troubleshooting determined the following:

1. This error is associated with 3 list:reference fields. When I remove 
them from the script, the script executes uneventfully. If any one of them 
is included, it fails.
2. This only happens after 10 records have been successfully inserted.
3. There is no discernible difference between the records that get 
successfully added and those that don't. The error happens even when I hard 
code the lists for the list:reference field. It seems to be associated with 
number of records, rather than which records. 
4. The script executes successfully when I change the field types from 
'list:reference' to 'string' and insert strings instead of lists. You 
should not assume from this that there is a data issue. As I said, 
hardcoded lists get rejected also. I am 99% certain valid data is not the 
issue.
5. This happens in both SQLite and Postgres.

Here is the model declaration for one of the three fields. They are all 
analogous:

Field('genres','string','list:reference genre', requires=IS_IN_DB(db, 
'genre.id', '%(name)s [%(id)s]', multiple=True))

Here is how I update each new row in the database:

db.movies.validate_and_insert(**movie) (movie is a dict)

Here is how I hardcoded values into the fields: movie['genre'] = {456, 368, 
239]

Now, if someone doesn't have a solution, can they tell me if I can 

1. Programmatically remove the list:reference from the model prior to data 
updates and programmatically restore it afterwards?
2. Retain all the functionality of these fields by toggling this way? 

Seriously considering going the join table route and skipping the list 
reference fields. Are there any gotchas there? 
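
(For reference, the "join table route" would look roughly like this hedged 
sketch: a plain linking table plus a join at query time, instead of a 
list:reference column; movie_id is assumed to hold a movie's id.)

db.define_table('movie_genre',
    Field('movie', 'reference movies'),
    Field('genre', 'reference genre'))

# genres for one movie via a join
rows = db((db.movie_genre.movie == movie_id) &
          (db.movie_genre.genre == db.genre.id)).select(db.genre.name)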




-- 





[web2py] Re: Loading data into list:reference fields eliciting strange NoneType not iterable' error

2012-08-22 Thread Mike Girard
There were a couple of typos in my post that do not correspond to my code.

Should be movie['genres'] = [456, 368, 239]

On Wednesday, August 22, 2012 2:49:29 PM UTC-4, Mike Girard wrote:

 I have written a script to parse a large xml file and insert the contents 
 in to my app db. I am using lxml.

 After about 10 records get inserted, the script fails with 
 type 'exceptions.TypeError' argument of type 'NoneType' is not iterable

 Troubleshooting determined the following:

 1. This error is associated with 3 list:reference fields. When I remove 
 them from the script, the script executes uneventfully. If any one of them 
 is included, it fails.
 2. This only happens after 10 records have been successfully inserted.
 3. There is no discernible difference between the records that get 
 successfully added and those that don't. The error happens even when I hard 
 code the lists for the list:reference field. It seems to be associated with 
 number of records, rather than which records. 
 4. The script executes successfully when I change the field types from 
 'list:reference' to 'string' and insert strings instead of lists. You 
 should not assume from this that there is a data issue. As I said, 
 hardcoded lists get rejected also. I am 99% certain valid data is not the 
 issue.
 5. This happens in both SQLLite and Postgres

 Here is the model declaration for one of the three fields. They are all 
 analogous:

 Field('genres','string','list:reference genre', requires=IS_IN_DB(db, '
 genre.id', '%(name)s [%(id)s]', multiple=True))

 Here is how I update each new row in the  the database:

 db.movies.validate_and_insert(**movie) (movie is a dict)

 Here is how I hardcoded values into the fields: movie['genre'] = {456, 
 368, 239]

 Now, if someone doesn't have a solution, can they tell me if I can 

 1.Programmatically remove the list:reference from the model prior to data 
 updates and programmatically restore it afterwards?
 2. Retain all the functionality of these fields by toggling this way? 

 Seriously considering going the join table route and skipping the list 
 reference fields. Are there any gotchas there? 






-- 





[web2py] Re: Loading data into list:reference fields eliciting strange NoneType not iterable' error

2012-08-22 Thread Mike Girard
One other troubleshooting detail:

I did type(obj) on the dict, 'movie' as well as all the items inside it. 
They all check out. No 'None's anywhere.

On Wednesday, August 22, 2012 2:49:29 PM UTC-4, Mike Girard wrote:

 I have written a script to parse a large xml file and insert the contents 
 in to my app db. I am using lxml.

 After about 10 records get inserted, the script fails with 
 type 'exceptions.TypeError' argument of type 'NoneType' is not iterable

 Troubleshooting determined the following:

 1. This error is associated with 3 list:reference fields. When I remove 
 them from the script, the script executes uneventfully. If any one of them 
 is included, it fails.
 2. This only happens after 10 records have been successfully inserted.
 3. There is no discernible difference between the records that get 
 successfully added and those that don't. The error happens even when I hard 
 code the lists for the list:reference field. It seems to be associated with 
 number of records, rather than which records. 
 4. The script executes successfully when I change the field types from 
 'list:reference' to 'string' and insert strings instead of lists. You 
 should not assume from this that there is a data issue. As I said, 
 hardcoded lists get rejected also. I am 99% certain valid data is not the 
 issue.
 5. This happens in both SQLLite and Postgres

 Here is the model declaration for one of the three fields. They are all 
 analogous:

 Field('genres','string','list:reference genre', requires=IS_IN_DB(db, '
 genre.id', '%(name)s [%(id)s]', multiple=True))

 Here is how I update each new row in the  the database:

 db.movies.validate_and_insert(**movie) (movie is a dict)

 Here is how I hardcoded values into the fields: movie['genre'] = {456, 
 368, 239]

 Now, if someone doesn't have a solution, can they tell me if I can 

 1.Programmatically remove the list:reference from the model prior to data 
 updates and programmatically restore it afterwards?
 2. Retain all the functionality of these fields by toggling this way? 

 Seriously considering going the join table route and skipping the list 
 reference fields. Are there any gotchas there? 






-- 





Re: [web2py] Re: Loading data into list:reference fields eliciting strange NoneType not iterable' error

2012-08-22 Thread Mike Girard
Yeah. A result of switching it off and then back on.

Good catch.

On Wed, Aug 22, 2012 at 3:19 PM, Anthony abasta...@gmail.com wrote:

 I assume this is a typo too, but just to be sure:

 Field('genres','string','list:reference genre', ...

 You have both 'string' and 'list:reference' there.

 Anthony


 On Wednesday, August 22, 2012 2:49:29 PM UTC-4, Mike Girard wrote:

 I have written a script to parse a large xml file and insert the contents
 in to my app db. I am using lxml.

 After about 10 records get inserted, the script fails with
 type 'exceptions.TypeError' argument of type 'NoneType' is not iterable

 Troubleshooting determined the following:

 1. This error is associated with 3 list:reference fields. When I remove
 them from the script, the script executes uneventfully. If any one of them
 is included, it fails.
 2. This only happens after 10 records have been successfully inserted.
 3. There is no discernible difference between the records that get
 successfully added and those that don't. The error happens even when I hard
 code the lists for the list:reference field. It seems to be associated with
 number of records, rather than which records.
 4. The script executes successfully when I change the field types from
 'list:reference' to 'string' and insert strings instead of lists. You
 should not assume from this that there is a data issue. As I said,
 hardcoded lists get rejected also. I am 99% certain valid data is not the
 issue.
 5. This happens in both SQLLite and Postgres

 Here is the model declaration for one of the three fields. They are all
 analogous:

 Field('genres','string','list:**reference genre', requires=IS_IN_DB(db, '
 genre.id', '%(name)s [%(id)s]', multiple=True))

 Here is how I update each new row in the  the database:

 db.movies.validate_and_insert(movie) (movie is a dict)

 Here is how I hardcoded values into the fields: movie['genre'] = {456,
 368, 239]

 Now, if someone doesn't have a solution, can they tell me if I can

 1.Programmatically remove the list:reference from the model prior to data
 updates and programmatically restore it afterwards?
 2. Retain all the functionality of these fields by toggling this way?

 Seriously considering going the join table route and skipping the list
 reference fields. Are there any gotchas there?




  --





-- 





Re: [web2py] Re: Loading data into list:reference fields eliciting strange NoneType not iterable' error

2012-08-22 Thread Mike Girard
Yeah, I should have included a small script. The script I am using is 
fairly large, though. 

Here is an outline of my script which should hit the most salient details. 

movie = {}

for tag in tags:
    movie['title'] = ...     # parsed from xml directly

    movie['genres'] = ...    # genre names pulled from xml; if new, added to the genre
                             # table. IDs returned as a list with:
                             # ids = [r.id for r in db(db.genre.api_id.belongs(api_vals)).select(db.genre.id)]
                             # object tested for type and values

    movie['actors'] = ...    # same as genres, but uses the person table for movie.actors

    movie['directors'] = ... # same as genres, but uses the person table for movie.directors
                             # (both actors/directors use the person table; could this be a problem?)

    db.movies.validate_and_insert(**movie)





   
   












On Wednesday, August 22, 2012 3:37:11 PM UTC-4, Niphlod wrote:

 I don't get whether it is fixed; anyway, I can't reproduce.
 BTW: it's always better to post a simple script that reproduces the issue 
 (simpler for you to check for copy/paste errors and for whoever helps).

 db.py

 db.define_table('genres',
     Field('name')
 )

 db.define_table('movies',
     Field('name'),
     Field('genres', 'string', 'list:reference genres',
           requires=IS_IN_DB(db, 'genres.id', '%(name)s [%(id)s]', multiple=True))
 )




 shell
 >>> db.genres.insert(name='horror')
 1
 >>> db.genres.insert(name='comedy')
 2
 >>> db.movies.insert(name='terminator', genres=[1,2])
 1
 >>> db.movies.insert(name='alien', genres=[2])
 2
 >>> db.movies.insert(name='alien2', genres=[2])
 3
 >>> print db(db.movies.id > 0).select()
 movies.id,movies.name,movies.genres
 1,terminator,|1|2|
 2,alien,|2|
 3,alien2,|1|

 >>> db.movies.insert(name='alien2', genres=[1,6])
 4
 >>> print db(db.movies.id > 0).select()
 movies.id,movies.name,movies.genres
 1,terminator,|1|2|
 2,alien,|2|
 3,alien2,|1|
 4,alien2,|1|6|

 >>> db.movies.validate_and_insert(name='alien2', genres=[1])
 Row {'errors': Row {}, 'id': 5}
 >>> db.movies.validate_and_insert(name='alien3', genres=[1])
 Row {'errors': Row {}, 'id': 6}
 >>> db.movies.validate_and_insert(name='alien3', genres=[1,2])
 Row {'errors': Row {}, 'id': 7}
 >>> print db(db.movies.id > 0).select()
 movies.id,movies.name,movies.genres
 1,terminator,|1|2|
 2,alien,|2|
 3,alien2,|1|
 4,alien2,|1|6|
 5,alien2,|1|
 6,alien3,|1|
 7,alien3,|1|2|

 >>> db.movies.validate_and_insert(name='alien3', genres=[1,6])
 Row {'errors': Row {'genres': 'value not in database'}, 'id': None}





 All is working ok. 

   


-- 





Re: [web2py] Re: Loading data into list:reference fields eliciting strange NoneType not iterable' error

2012-08-22 Thread Mike Girard
Yeah, I get the references directly from the genre/person tables before 
inserting them in movie. I also test to make sure the item being inserted 
 is a list with data. Printing contents after the error shows nothing odd. 

I have tested this in both Postgres and SQLite. Same thing happens in both.



On Wednesday, August 22, 2012 3:45:31 PM UTC-4, villas wrote:

 Did you try checking that all the references exist?  
 So, maybe it is a referential integrity problem.
 You are prob not using Sqlite but a proper DB.
 Just ideas,  D


 On Wednesday, August 22, 2012 8:23:23 PM UTC+1, Mike Girard wrote:

 Yeah. A result of switching it off and then back on. 

 Good catch. 

 On Wed, Aug 22, 2012 at 3:19 PM, Anthony abas...@gmail.com wrote:

 I assume this is a typo too, but just to be sure:

 Field('genres','string','list:reference genre', ...

 You have both 'string' and 'list:reference' there.

 Anthony


 On Wednesday, August 22, 2012 2:49:29 PM UTC-4, Mike Girard wrote:

 I have written a script to parse a large xml file and insert the 
 contents in to my app db. I am using lxml.

 After about 10 records get inserted, the script fails with 
 type 'exceptions.TypeError' argument of type 'NoneType' is not 
 iterable

 Troubleshooting determined the following:

 1. This error is associated with 3 list:reference fields. When I remove 
 them from the script, the script executes uneventfully. If any one of them 
 is included, it fails.
 2. This only happens after 10 records have been successfully inserted.
 3. There is no discernible difference between the records that get 
 successfully added and those that don't. The error happens even when I 
 hard 
 code the lists for the list:reference field. It seems to be associated 
 with 
 number of records, rather than which records. 
 4. The script executes successfully when I change the field types from 
 'list:reference' to 'string' and insert strings instead of lists. You 
 should not assume from this that there is a data issue. As I said, 
 hardcoded lists get rejected also. I am 99% certain valid data is not the 
 issue.
 5. This happens in both SQLLite and Postgres

 Here is the model declaration for one of the three fields. They are all 
 analogous:

 Field('genres','string','list:**reference genre', 
 requires=IS_IN_DB(db, 'genre.id', '%(name)s [%(id)s]', multiple=True))

 Here is how I update each new row in the  the database:

 db.movies.validate_and_insert(movie) (movie is a dict)

 Here is how I hardcoded values into the fields: movie['genre'] = {456, 
 368, 239]

 Now, if someone doesn't have a solution, can they tell me if I can 

 1.Programmatically remove the list:reference from the model prior to 
 data updates and programmatically restore it afterwards?
 2. Retain all the functionality of these fields by toggling this way? 

 Seriously considering going the join table route and skipping the list 
 reference fields. Are there any gotchas there? 




  -- 
  
  
  




-- 





Re: [web2py] Re: Loading data into list:reference fields eliciting strange NoneType not iterable' error

2012-08-22 Thread Mike Girard
Traceback (most recent call last):
  File "/Applications/web2pystable/gluon/restricted.py", line 205, in restricted
    exec ccode in environment
  File "/Applications/web2pystable/applications/myapp/controllers/default.py", line 199, in <module>
  File "/Applications/web2pystable/gluon/globals.py", line 173, in <lambda>
    self._caller = lambda f: f()
  File "/Applications/web2pystable/applications/myapp/controllers/default.py", line 44, in load_data
    rows = parse_and_load()
  File "/Applications/web2pystable/applications/myapp/models/movie.py", line 246, in parse_and_load
    msg = db.movie.validate_and_insert(**movie)
  File "/Applications/web2pystable/gluon/dal.py", line 6836, in validate_and_insert
    value,error = self[key].validate(value)
  File "/Applications/web2pystable/gluon/dal.py", line 7434, in validate
    (value, error) = validator(value)
  File "/Applications/web2pystable/gluon/validators.py", line 484, in __call__
    if not [x for x in values if not str(x) in self.theset]:
TypeError: argument of type 'NoneType' is not iterable


On Wed, Aug 22, 2012 at 4:36 PM, Anthony abasta...@gmail.com wrote:

 Can you show the actual traceback so we can see where the code is failing?
 Perhaps the problem is in the field validator.

 Anthony


 On Wednesday, August 22, 2012 4:10:40 PM UTC-4, Mike Girard wrote:

 Yeah, I get the references directly from the genre/person tables before
 inserting them in movie. I also test to make sure the item being inserted
  is a list with data. Printing contents after the error shows nothing odd.

 I have tested this in both Postgres and SQLlite. Same thing happens in
 both.



 On Wednesday, August 22, 2012 3:45:31 PM UTC-4, villas wrote:

 Did you try checking that all the references exist?
 So, maybe it is a referential integrity problem.
 You are prob not using Sqlite but a proper DB.
 Just ideas,  D


 On Wednesday, August 22, 2012 8:23:23 PM UTC+1, Mike Girard wrote:

 Yeah. A result of switching it off and then back on.

 Good catch.

 On Wed, Aug 22, 2012 at 3:19 PM, Anthony abas...@gmail.com wrote:

 I assume this is a typo too, but just to be sure:

 Field('genres','string','list:**reference genre', ...

 You have both 'string' and 'list:reference' there.

 Anthony


 On Wednesday, August 22, 2012 2:49:29 PM UTC-4, Mike Girard wrote:

 I have written a script to parse a large xml file and insert the
 contents in to my app db. I am using lxml.

 After about 10 records get inserted, the script fails with
 type 'exceptions.TypeError' argument of type 'NoneType' is not
 iterable

 Troubleshooting determined the following:

 1. This error is associated with 3 list:reference fields. When I
 remove them from the script, the script executes uneventfully. If any one
 of them is included, it fails.
 2. This only happens after 10 records have been successfully inserted.
 3. There is no discernible difference between the records that get
 successfully added and those that don't. The error happens even when I 
 hard
 code the lists for the list:reference field. It seems to be associated 
 with
 number of records, rather than which records.
 4. The script executes successfully when I change the field types
 from 'list:reference' to 'string' and insert strings instead of lists. 
 You
 should not assume from this that there is a data issue. As I said,
 hardcoded lists get rejected also. I am 99% certain valid data is not the
 issue.
 5. This happens in both SQLLite and Postgres

 Here is the model declaration for one of the three fields. They are
 all analogous:

 Field('genres','string','list:reference genre',
 requires=IS_IN_DB(db, 'genre.id', '%(name)s [%(id)s]',
 multiple=True))

 Here is how I update each new row in the  the database:

 db.movies.validate_and_insert(**movie) (movie is a dict)

 Here is how I hardcoded values into the fields: movie['genre'] =
 {456, 368, 239]

 Now, if someone doesn't have a solution, can they tell me if I can

 1.Programmatically remove the list:reference from the model prior to
 data updates and programmatically restore it afterwards?
 2. Retain all the functionality of these fields by toggling this way?

 Seriously considering going the join table route and skipping the
 list reference fields. Are there any gotchas there?




  --





  --





-- 





Re: [web2py] Re: Loading data into list:reference fields eliciting strange NoneType not iterable' error

2012-08-22 Thread Mike Girard
I probably am doing something wrong, but you have yet to simulate the case 
originally described, which is a loop that fails after about 10 iterations 
with data that is not discernibly different.



On Wednesday, August 22, 2012 4:32:39 PM UTC-4, Niphlod wrote:

 You are doing something wrong... a different model, some strange value 
 type... etc.
 Tested with dict automatic unpacking.

 >>> movie = dict(name='alien4', genres=[1,2])
 >>> db.movies.validate_and_insert(**movie)
 Row {'errors': Row {}, 'id': 4}
 >>> movie = dict(name='alien4', genres=[1,7])
 >>> db.movies.validate_and_insert(**movie)
 Row {'errors': Row {'genres': 'value not in database'}, 'id': None}


 Could you at least print one movie you're trying to insert that fails in 
 your script? Standard print, simplejson.dump()ed, or pprint.pprint()ed, 
 directly before validate_and_insert(**movie). Just to pinpoint the issue.

 On Wednesday, August 22, 2012 10:10:40 PM UTC+2, Mike Girard wrote:

 Yeah, I get the references directly from the genre/person tables before 
 inserting them in movie. I also test to make sure the item being inserted 
  is a list with data. Printing contents after the error shows nothing odd. 

 I have tested this in both Postgres and SQLlite. Same thing happens in 
 both.



 On Wednesday, August 22, 2012 3:45:31 PM UTC-4, villas wrote:

 Did you try checking that all the references exist?  
 So, maybe it is a referential integrity problem.
 You are prob not using Sqlite but a proper DB.
 Just ideas,  D


 On Wednesday, August 22, 2012 8:23:23 PM UTC+1, Mike Girard wrote:

 Yeah. A result of switching it off and then back on. 

 Good catch. 

 On Wed, Aug 22, 2012 at 3:19 PM, Anthony abas...@gmail.com wrote:

 I assume this is a typo too, but just to be sure:

 Field('genres','string','list:reference genre', ...

 You have both 'string' and 'list:reference' there.

 Anthony


 On Wednesday, August 22, 2012 2:49:29 PM UTC-4, Mike Girard wrote:

 I have written a script to parse a large xml file and insert the 
 contents in to my app db. I am using lxml.

 After about 10 records get inserted, the script fails with 
 type 'exceptions.TypeError' argument of type 'NoneType' is not 
 iterable

 Troubleshooting determined the following:

 1. This error is associated with 3 list:reference fields. When I 
 remove them from the script, the script executes uneventfully. If any 
 one 
 of them is included, it fails.
 2. This only happens after 10 records have been successfully inserted.
 3. There is no discernible difference between the records that get 
 successfully added and those that don't. The error happens even when I 
 hard 
 code the lists for the list:reference field. It seems to be associated 
 with 
 number of records, rather than which records. 
 4. The script executes successfully when I change the field types 
 from 'list:reference' to 'string' and insert strings instead of lists. 
 You 
 should not assume from this that there is a data issue. As I said, 
 hardcoded lists get rejected also. I am 99% certain valid data is not 
 the 
 issue.
 5. This happens in both SQLLite and Postgres

 Here is the model declaration for one of the three fields. They are 
 all analogous:

 Field('genres','string','list:**reference genre', 
 requires=IS_IN_DB(db, 'genre.id', '%(name)s [%(id)s]', 
 multiple=True))

 Here is how I update each new row in the  the database:

 db.movies.validate_and_insert(movie) (movie is a dict)

 Here is how I hardcoded values into the fields: movie['genre'] = 
 {456, 368, 239]

 Now, if someone doesn't have a solution, can they tell me if I can 

 1.Programmatically remove the list:reference from the model prior to 
 data updates and programmatically restore it afterwards?
 2. Retain all the functionality of these fields by toggling this way? 

 Seriously considering going the join table route and skipping the 
 list reference fields. Are there any gotchas there? 




  -- 
  
  
  




-- 





Re: [web2py] Re: Loading data into list:reference fields eliciting strange NoneType not iterable' error

2012-08-22 Thread Mike Girard
Will copying over dal.py suffice?

What should I be using generally to avoid losing hours and sleep? Trunk or 
stable? I fought with this thing for a long time before posting. 

On Wednesday, August 22, 2012 5:09:08 PM UTC-4, Anthony wrote:

 This is a bug that was fixed: 
 http://code.google.com/p/web2py/issues/detail?id=647

 Try trunk.

 Anthony

 On Wednesday, August 22, 2012 4:44:13 PM UTC-4, Mike Girard wrote:

 Traceback (most recent call last):
   File "/Applications/web2pystable/gluon/restricted.py", line 205, in restricted
     exec ccode in environment
   File "/Applications/web2pystable/applications/myapp/controllers/default.py", line 199, in <module>
   File "/Applications/web2pystable/gluon/globals.py", line 173, in <lambda>
     self._caller = lambda f: f()
   File "/Applications/web2pystable/applications/myapp/controllers/default.py", line 44, in load_data
     rows = parse_and_load()
   File "/Applications/web2pystable/applications/myapp/models/movie.py", line 246, in parse_and_load
     msg = db.movie.validate_and_insert(**movie)
   File "/Applications/web2pystable/gluon/dal.py", line 6836, in validate_and_insert
     value,error = self[key].validate(value)
   File "/Applications/web2pystable/gluon/dal.py", line 7434, in validate
     (value, error) = validator(value)
   File "/Applications/web2pystable/gluon/validators.py", line 484, in __call__
     if not [x for x in values if not str(x) in self.theset]:
 TypeError: argument of type 'NoneType' is not iterable


 On Wed, Aug 22, 2012 at 4:36 PM, Anthony abas...@gmail.com wrote:

 Can you show the actual traceback so we can see where the code is 
 failing? Perhaps the problem is in the field validator.

 Anthony


 On Wednesday, August 22, 2012 4:10:40 PM UTC-4, Mike Girard wrote:

 Yeah, I get the references directly from the genre/person tables before 
 inserting them in movie. I also test to make sure the item being inserted 
  is a list with data. Printing contents after the error shows nothing odd. 

 I have tested this in both Postgres and SQLlite. Same thing happens in 
 both.



 On Wednesday, August 22, 2012 3:45:31 PM UTC-4, villas wrote:

 Did you try checking that all the references exist?  
 So, maybe it is a referential integrity problem.
 You are prob not using Sqlite but a proper DB.
 Just ideas,  D


 On Wednesday, August 22, 2012 8:23:23 PM UTC+1, Mike Girard wrote:

 Yeah. A result of switching it off and then back on. 

 Good catch. 

 On Wed, Aug 22, 2012 at 3:19 PM, Anthony abas...@gmail.com wrote:

 I assume this is a typo too, but just to be sure:

 Field('genres','string','list:reference genre', 

 You have both 'string' and 'list:reference' there.

 Anthony


 On Wednesday, August 22, 2012 2:49:29 PM UTC-4, Mike Girard wrote:

 I have written a script to parse a large XML file and insert the 
 contents into my app db. I am using lxml.

 After about 10 records get inserted, the script fails with 
 <type 'exceptions.TypeError'> argument of type 'NoneType' is not 
 iterable

 Troubleshooting determined the following:

 1. This error is associated with 3 list:reference fields. When I 
 remove them from the script, the script executes uneventfully. If any 
 one 
 of them is included, it fails.
 2. This only happens after 10 records have been successfully 
 inserted.
 3. There is no discernible difference between the records that get 
 successfully added and those that don't. The error happens even when I 
 hard 
 code the lists for the list:reference field. It seems to be associated 
 with 
 number of records, rather than which records. 
 4. The script executes successfully when I change the field types 
 from 'list:reference' to 'string' and insert strings instead of lists. 
 You 
 should not assume from this that there is a data issue. As I said, 
 hardcoded lists get rejected also. I am 99% certain valid data is not 
 the 
 issue.
 5. This happens in both SQLLite and Postgres

 Here is the model declaration for one of the three fields. They are 
 all analogous:

 Field('genres','string','list:reference genre', 
 requires=IS_IN_DB(db, 'genre.id', '%(name)s [%(id)s]', 
 multiple=True))

 Here is how I update each new row in the  the database:

 db.movies.validate_and_insert(**movie) (movie is a dict)

 Here is how I hardcoded values into the fields: movie['genre'] = 
 {456, 368, 239]

 Now, if someone doesn't have a solution, can they tell me if I can 

 1.Programmatically remove the list:reference from the model prior 
 to data updates and programmatically restore it afterwards?
 2. Retain all the functionality of these fields by toggling this 
 way? 

 Seriously considering going the join table route and skipping the 
 list reference fields. Are there any gotchas there? 




  -- 
  
  
  


  -- 
  
  
  




-- 





Re: [web2py] Re: Loading data into list:reference fields eliciting strange 'NoneType not iterable' error

2012-08-22 Thread Mike Girard
I switched to trunk and then imported my app. Everything seems to be 
working.

That was painful. 

Next time I get a cryptic error I should probably post it immediately.

On Wednesday, August 22, 2012 5:09:08 PM UTC-4, Anthony wrote:

 This is a bug that was fixed: 
 http://code.google.com/p/web2py/issues/detail?id=647

 Try trunk.

 Anthony

 On Wednesday, August 22, 2012 4:44:13 PM UTC-4, Mike Girard wrote:

 Traceback (most recent call last):

   File "/Applications/web2pystable/gluon/restricted.py", line 205, in restricted
     exec ccode in environment
   File "/Applications/web2pystable/applications/myapp/controllers/default.py", line 199, in <module>
   File "/Applications/web2pystable/gluon/globals.py", line 173, in <lambda>
     self._caller = lambda f: f()
   File "/Applications/web2pystable/applications/myapp/controllers/default.py", line 44, in load_data
     rows = parse_and_load()
   File "/Applications/web2pystable/applications/myapp/models/movie.py", line 246, in parse_and_load
     msg = db.movie.validate_and_insert(**movie)
   File "/Applications/web2pystable/gluon/dal.py", line 6836, in validate_and_insert
     value,error = self[key].validate(value)
   File "/Applications/web2pystable/gluon/dal.py", line 7434, in validate
     (value, error) = validator(value)
   File "/Applications/web2pystable/gluon/validators.py", line 484, in __call__
     if not [x for x in values if not str(x) in self.theset]:
 TypeError: argument of type 'NoneType' is not iterable


 On Wed, Aug 22, 2012 at 4:36 PM, Anthony abas...@gmail.com wrote:

 Can you show the actual traceback so we can see where the code is 
 failing? Perhaps the problem is in the field validator.

 Anthony


 On Wednesday, August 22, 2012 4:10:40 PM UTC-4, Mike Girard wrote:

 Yeah, I get the references directly from the genre/person tables before 
 inserting them in movie. I also test to make sure the item being inserted 
  is a list with data. Printing contents after the error shows nothing odd. 

 I have tested this in both Postgres and SQLlite. Same thing happens in 
 both.



 On Wednesday, August 22, 2012 3:45:31 PM UTC-4, villas wrote:

 Did you try checking that all the references exist?  
 So, maybe it is a referential integrity problem.
 You are prob not using Sqlite but a proper DB.
 Just ideas,  D


 On Wednesday, August 22, 2012 8:23:23 PM UTC+1, Mike Girard wrote:

 Yeah. A result of switching it off and then back on. 

 Good catch. 

 On Wed, Aug 22, 2012 at 3:19 PM, Anthony abas...@gmail.com wrote:

 I assume this is a typo too, but just to be sure:

 Field('genres','string','list:reference genre', 

 You have both 'string' and 'list:reference' there.

 Anthony


 On Wednesday, August 22, 2012 2:49:29 PM UTC-4, Mike Girard wrote:

 I have written a script to parse a large xml file and insert the 
 contents in to my app db. I am using lxml.

 After about 10 records get inserted, the script fails with 
 type 'exceptions.TypeError' argument of type 'NoneType' is not 
 iterable

 Troubleshooting determined the following:

 1. This error is associated with 3 list:reference fields. When I 
 remove them from the script, the script executes uneventfully. If any 
 one 
 of them is included, it fails.
 2. This only happens after 10 records have been successfully 
 inserted.
 3. There is no discernible difference between the records that get 
 successfully added and those that don't. The error happens even when I 
 hard 
 code the lists for the list:reference field. It seems to be associated 
 with 
 number of records, rather than which records. 
 4. The script executes successfully when I change the field types 
 from 'list:reference' to 'string' and insert strings instead of lists. 
 You 
 should not assume from this that there is a data issue. As I said, 
 hardcoded lists get rejected also. I am 99% certain valid data is not 
 the 
 issue.
 5. This happens in both SQLLite and Postgres

 Here is the model declaration for one of the three fields. They are 
 all analogous:

 Field('genres','string','list:reference genre', 
 requires=IS_IN_DB(db, 'genre.id', '%(name)s [%(id)s]', 
 multiple=True))

 Here is how I update each new row in the  the database:

 db.movies.validate_and_insert(**movie) (movie is a dict)

 Here is how I hardcoded values into the fields: movie['genre'] = 
 {456, 368, 239]

 Now, if someone doesn't have a solution, can they tell me if I can 

 1.Programmatically remove the list:reference from the model prior 
 to data updates and programmatically restore it afterwards?
 2. Retain all the functionality of these fields by toggling this 
 way? 

 Seriously considering going the join table route and skipping the 
 list reference fields. Are there any gotchas there? 




  -- 
  
  
  


  -- 
  
  
  




-- 





Re: [web2py] Re: Best way to insert 200k records?

2012-08-20 Thread Mike Girard
Hi Andrew:

Thanks for the explanation.  I am going to assume that database neutrality 
is the main selling point of sticking with the DAL. 

I second your interest in a DAL method for genuine bulk loading, though I 
reckon that would be quite a beastly project. 

I appreciate everyone's input. This community is uniquely helpful. 

On Monday, August 20, 2012 1:49:46 AM UTC-4, Andrew wrote:

 Hi Martin,
 It depends on the RDBMS.  Some are still one row at a time, which makes 
 insert and bulk_insert the same speed (it just makes the statement easier 
 to write).

 Hi Mike,
 One of the goals of the DAL is to make the api database neutral, allowing 
 you to switch between databases without changing your code (hopefully).  
 The thing you sacrifice if you use a bulk native loader (today) is that you 
 are locking yourself into a specific database platform.  The api for the 
 DAL doesn't have any platform specific features (I think), although some 
 features don't apply to all.

 What I was suggesting was a Native Load method which is defined within 
 each database adapter as they will all be different.  Just a thought 
 although unlike ANSI SQL, every separate platform probably has their own 
 syntax for their bulk loader.

 Reiterating,  I think bulk loading is a thing you would do as a batch / 
 scheduler process.  It's not what you'd use with your web end-user app.  If 
 that's the case, does it make sense for it to be a part of the DAL,  or as 
 perhaps separate (contrib) modules targetting specific platforms ?


 On Monday, August 20, 2012 1:36:14 PM UTC+12, Mike Girard wrote:

 bulk insert is a way faster than regular insert when you have many rows

 I think we need to clarify terms. By Massimo's own account in the web2py 
 book, the DAL bulk insert is not faster than db.insert unless you are using 
 the GAE. So are you talking about your db's native bulk methods or is the 
 book wrong?

 Could someone just please answer what, if anything, is being sacrificed 
 when you use your database's own bulk loading methods instead of using the 
 DAL? Why the DAL religion about this?

 On Sunday, August 19, 2012 5:09:43 PM UTC-4, Martin.Mulone wrote:

 bulk insert is a way faster than regular insert when you have many rows. 
 If you are under mysql you can use load data infile, this is incredible 
 fast, but you need special privileges under mysql.

 2012/8/19 Andrew awill...@gmail.com

 Is it possible that we add a native bulk insert function which is 
 coded up in each adapter.  Even bulk_insert is an odbc 1 row at a 
 time-slow 
 for big files.  I need to load huge files all the time and I am writing 
 custom modules to do this with a native loader.  Should this be a dal 
 option?  Worth noting that this type of operation is a batch, back end 
 thing,  I wouldn't do this for a end user web app.

 I would expect that each DBMS needs different info to start a bulk 
 load, so the interface may be tricky, or just pass a dict and let the 
 adapter work it out.
 What do you think?

 --






 -- 
  http://www.tecnodoc.com.ar



-- 





Re: [web2py] Where to host web2py

2012-08-19 Thread Mike Girard
Wow, those are great prices for VPS.

Was there anything besides price that made you choose them? 

On Sunday, August 19, 2012 5:42:32 AM UTC-4, Simon Carr wrote:

 I have just found this service

 http://webkeepers.com

 I think i will give their medium service a try on the one month contract 
 and see how it goes.  The 4gb version seems very reasonable even on the 
 monthly rates and better still if i decide to stick with them for a year.  
 My only worry is that i am in the uk and so will most of my 
 visitors/customers so latency might become an issue. 

 I also have another option.  I have virgin fibreoptic broadband with 120mb 
 download and 10mb upload and a static ip.  I have tried pointing my domain 
 name at my home router before and it works fine.  The only issue here is 
 that i would have to invest in a new pc and the computer room shares a wall 
 with the bedroom so the fan noise might become an issue at night. 

 Will give webkeepers a try and report back with results. 

 Simon
 On 19 Aug 2012 00:54, Simon Carr simon...@gmail.com javascript: 
 wrote:

 After a few weeks of getting to know web2py i have decided that it should 
 become one of the development tools in my tool belt.

 The only thing that is stopping me moving on however is hosting options. 
 I am going to take a look at app engine as one option but i need to know 
 that i can also deploy on a standard web server.

 I would need to be able to use apache which i know web2py can do but i am 
 not sure how complicated this is. I also think that I am going to need to 
 use a VPS but these go up in price very quickly beyond 1gb and 1 cpu.

 Can anyone give some comments on where they host,  what spec server they 
 have and what performance they get.

 Thanks
 Simon

 --



  

-- 





Re: [web2py] Re: Best way to insert 200k records?

2012-08-19 Thread Mike Girard
"bulk insert is a way faster than regular insert when you have many rows"

I think we need to clarify terms. By Massimo's own account in the web2py 
book, the DAL bulk insert is not faster than db.insert unless you are using 
the GAE. So are you talking about your db's native bulk methods or is the 
book wrong?

Could someone just please answer what, if anything, is being sacrificed 
when you use your database's own bulk loading methods instead of using the 
DAL? Why the DAL religion about this?
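
To make the comparison concrete, here is a rough sketch of the two approaches 
being weighed (not a recommendation). The first stays inside the DAL and is 
portable; the second is MySQL-specific, the file path and column list are made 
up, and LOAD DATA INFILE needs the FILE privilege mentioned above:

# portable DAL way (validators run only if you use validate_and_insert instead)
for rec in records:              # records: a list of dicts parsed from the source file
    db.person.insert(**rec)
db.commit()

# MySQL-only native way, bypassing the DAL row handling entirely
db.executesql("LOAD DATA INFILE '/tmp/people.csv' INTO TABLE person "
              "FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n' (name)")
db.commit()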

On Sunday, August 19, 2012 5:09:43 PM UTC-4, Martin.Mulone wrote:

 bulk insert is a way faster than regular insert when you have many rows. 
 If you are under mysql you can use load data infile, this is incredible 
 fast, but you need special privileges under mysql.

 2012/8/19 Andrew awill...@gmail.com javascript:

 Is it possible that we add a native bulk insert function which is coded 
 up in each adapter.  Even bulk_insert is an odbc 1 row at a time-slow for 
 big files.  I need to load huge files all the time and I am writing custom 
 modules to do this with a native loader.  Should this be a dal option? 
  Worth noting that this type of operation is a batch, back end thing,  I 
 wouldn't do this for a end user web app.

 I would expect that each DBMS needs different info to start a bulk load, 
 so the interface may be tricky, or just pass a dict and let the adapter 
 work it out.
 What do you think?

 --






 -- 
  http://www.tecnodoc.com.ar



-- 





[web2py] DAL field rules and form constraints

2012-08-18 Thread Mike Girard
In preparation for doing a bulk insert into my app's tables I have been 
doing some testing from the web2py shell.

I issued the following command:

db.person.insert(name = 'Dustin Hoffman')

person.name has a unique=True setting as well as an IS_NOT_IN_DB form 
constraint. Dustin Hoffman is already in the database. 

The response for running this command was an id number (118)  suggesting 
the record had been added. However, when I looked at the person records via 
/myapplication/appadmin, it wasn't there and the id of the last record was 
117. This is fine, suggesting that the uniqueness constraint had been 
applied.

However, when I went back to the shell and issued this command -

>>> db.person(118)

- It returned a row.  Also

>>> db(db.person.name == 'Dustin Hoffman').count()
2

So it seems as if the duplicate record is in the db, though it doesn't show 
up in the browser.

Two questions, then:

1. Why is the duplicate record not showing up in /myapplication/appadmin?
2. If I want a script where duplicates are just rejected without special 
handling, should I just use validate_and_insert? What if I am doing a bulk 
insert or a csv import? (A minimal sketch of that approach follows below.)
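
A minimal sketch of the validate_and_insert approach, assuming the rows arrive 
as a list of dicts (parsed_rows is illustrative); because it runs the field 
validators, including IS_NOT_IN_DB, duplicate names come back as errors instead 
of being inserted:

rejected = []
for row in parsed_rows:
    result = db.person.validate_and_insert(**row)
    if result.errors:
        rejected.append((row, result.errors))   # duplicates collect here
db.commit()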

-- 





[web2py] Re: DAL field rules and form constraints

2012-08-18 Thread Mike Girard
Ok, so I brought up a new shell and the duplicate record was gone.

So can I confirm that field constraints get enforced with db.insert and 
therefore, any DAL method I use for inserting records is sound in this 
respect? I don't want to do a huge data dump and then have to do a lot of 
post-processing for duplicate records.

Thanks. 

On Saturday, August 18, 2012 2:50:18 PM UTC-4, Mike Girard wrote:

 In preparation for doing a bulk insert into my app's tables I have been 
 doing some testing from the web2py shell.

 I issued the following command:

 db.person.insert(name = 'Dustin Hoffman')

 person.name has a unique=true setting as well as an IS_NOT_IN_DB form 
 constraint. Dustin Hoffman is already in the database. 

 The response for running this command was an id number (118)  suggesting 
 the record had been added. However, when I looked at the person records via 
 /myapplication/appadmin, it wasn't there and the id of the last record was 
 117. This is fine, suggesting that the uniqueness constraint had been 
 applied.

 However, when I went back to the shell and issued this command -

  db.person(118) 

 - It returned a row.  Also

  db.person(name='Dustin Hoffman').count
  2

 So it seems as if the duplicate record is in the db, though it doesn't 
 show up in the browser.

 Two questions, then:

 1. Why is the duplicate record not showing up in /myapplication/appadmin
 2. If I want a script where duplicates are just rejected without special 
 handling, should I just use insert_and_validate? What if I am doing a bulk 
 insert or a csv import? 



-- 





[web2py] Re: DAL field rules and form constraints

2012-08-18 Thread Mike Girard
I did do db.commit from the shell. Still no duplicate in the appadmin. 

On Saturday, August 18, 2012 3:12:39 PM UTC-4, Anthony wrote:

 In the shell, the transaction won't actually be committed until you do 
 db.commit() (in your app code, you do not have to call db.commit() because 
 it will be called automatically at the end of the request). Also, when you 
 use .insert(), the validators don't run -- they only run when using SQLFORM 
 or when you insert via .validate_and_insert().

 Anthony

 On Saturday, August 18, 2012 2:50:18 PM UTC-4, Mike Girard wrote:

 In preparation for doing a bulk insert into my app's tables I have been 
 doing some testing from the web2py shell.

 I issued the following command:

 db.person.insert(name = 'Dustin Hoffman')

 person.name has a unique=true setting as well as an IS_NOT_IN_DB form 
 constraint. Dustin Hoffman is already in the database. 

 The response for running this command was an id number (118)  suggesting 
 the record had been added. However, when I looked at the person records via 
 /myapplication/appadmin, it wasn't there and the id of the last record was 
 117. This is fine, suggesting that the uniqueness constraint had been 
 applied.

 However, when I went back to the shell and issued this command -

  db.person(118) 

 - It returned a row.  Also

  db.person(name='Dustin Hoffman').count
  2

 So it seems as if the duplicate record is in the db, though it doesn't 
 show up in the browser.

 Two questions, then:

 1. Why is the duplicate record not showing up in /myapplication/appadmin
 2. If I want a script where duplicates are just rejected without special 
 handling, should I just use insert_and_validate? What if I am doing a bulk 
 insert or a csv import? 



-- 





[web2py] global name 'psycopg2_adapt' is not defined

2012-08-18 Thread Mike Girard
I am attempting to move from SQLite to Postgres because I was having table 
locking issues. 

I downloaded and installed Postgres and the Psycopg2-2.4.5. They work. I am 
able to administer Postgres and the installation of the driver was 
uneventful. 

I first encountered the self-drivers error (documented elsewhere in this 
group) and changed gluon/dal.py in accordance with the instructions. After 
doing that, the home page of my app loaded up as did default/appadmin. 
However. when I attempted to add a record to the database, I got the error 
 global name 'psycopg2_adapt' is not defined. 

Is there something else I need to do?

I am running the latest stable release from source. 

Thanks. 
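
For reference, the DAL connection string for Postgres looks along these lines 
in the model file (user, password and database name are placeholders); the 
psycopg2_adapt error above turned out to be a driver problem rather than a 
connection-string one:

db = DAL('postgres://myuser:mypassword@localhost/mydb')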

-- 





[web2py] Re: global name 'psycopg2_adapt' is not defined

2012-08-18 Thread Mike Girard
Apparently there is something wrong with my installation of the driver. 
Going to see if I can sort that before coming back. Consider this closed. 

On Saturday, August 18, 2012 8:16:36 PM UTC-4, Mike Girard wrote:

 I am attempting to move from SQLLite to Postgres because I was having 
 table locking issues.. 

 I downloaded and installed Postgres and the Psycopg2-2.4.5. They work. I 
 am able to administer Postgres and the installation of the driver was 
 uneventful. 

 I first encountered the self-drivers error (documented elsewhere in this 
 group) and changed gluon/dal.py in accordance with the instructions. After 
 doing that, the home page of my app loaded up as did default/appadmin. 
 However. when I attempted to add a record to the database, I got the error 
  global name 'psycopg2_adapt' is not defined. 

 Is there something else I need to do?

 I am running the latest stable release from source. 

 Thanks. 



-- 





[web2py] Re: global name 'psycopg2_adapt' is not defined

2012-08-18 Thread Mike Girard
In case anyone else encounters problems installing psycopg2 on a Mac: the 
problem was caused by outdated libcrypto and libssl libraries in 
/usr/lib. I fixed it by copying the newer files from 
PostgreSQL/9.1/lib, deleting the symlinks pointing at the old files, and creating 
new ones to replace them.

Example: 

sudo ln -s /usr/lib/libssl.1.0.0.dylib /usr/lib/libssl.dylib

after making sure that libssl.1.0.0.dylib had been copied into /usr/lib and 
the old libssl.dylib alias had been removed.

A little apprehensive about how this will affect some of my other 
applications, but at least web2py is now working with Postgres.
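
A quick sanity check after this kind of fix is to import the driver under the 
same interpreter that runs web2py, e.g. from the web2py shell:

import psycopg2
print(psycopg2.__version__)   # if this imports cleanly, the DAL postgres adapter can use it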


On Saturday, August 18, 2012 8:38:27 PM UTC-4, Mike Girard wrote:

 Apparently there is something wrong with my installation of the driver. 
 Going to see if I can sort that before coming back. Consider this closed. 

 On Saturday, August 18, 2012 8:16:36 PM UTC-4, Mike Girard wrote:

 I am attempting to move from SQLLite to Postgres because I was having 
 table locking issues.. 

 I downloaded and installed Postgres and the Psycopg2-2.4.5. They work. I 
 am able to administer Postgres and the installation of the driver was 
 uneventful. 

 I first encountered the self-drivers error (documented elsewhere in this 
 group) and changed gluon/dal.py in accordance with the instructions. After 
 doing that, the home page of my app loaded up as did default/appadmin. 
 However. when I attempted to add a record to the database, I got the error 
  global name 'psycopg2_adapt' is not defined. 

 Is there something else I need to do?

 I am running the latest stable release from source. 

 Thanks. 



-- 





[web2py] Re: Where to host web2py

2012-08-18 Thread Mike Girard
Hi Simon:

I know from having reviewed the trail on this issue in this group that 
other members have been successful in getting web2py working on Webfaction 
and hostgator.  Webfaction in particular seems to get high marks and their 
service seems to have some VPS-like features, though it is not VPS.

On Saturday, August 18, 2012 7:54:47 PM UTC-4, Simon Carr wrote:

 After a few weeks of getting to know web2py i have decided that it should 
 become one of the development tools in my tool belt. 

 The only thing that is stopping me moving on however is hosting options. I 
 am going to take a look at app engine as one option but i need to know that 
 i can also deploy on a standard web server. 

 I would need to be able to use apache which i know web2py can do but i am 
 not sure how complicated this is. I also think that I am going to need to 
 use a VPS but these go up in price very quickly beyond 1gb and 1 cpu. 

 Can anyone give some comments on where they host,  what spec server they 
 have and what performance they get. 

 Thanks
 Simon



-- 





[web2py] Best way to insert 200k records?

2012-08-17 Thread Mike Girard
Hello: 

What is the prescribed method for doing large bulk inserts? I am using SQLite 
as my backend. 

Is it ok to just write a Python script that talks to the db directly or should 
the DAL be used? The book says the bulk_insert method is not more advantageous 
than a for loop. 

I searched this group for an answer but didn't find anything definitive. 

Thanks.

PS Kudos to the web2py creator and contributors. Every day I am struck by how 
elegant, easy and well-designed it is. 

-- 





Re: [web2py] Best way to insert 200k records?

2012-08-17 Thread Mike Girard
But what method should I use? Is there anything better than bulk_insert? 
Naturally I would like this to work as fast as possible.

Thanks. 

-- 





[web2py] Re: Best way to insert 200k records?

2012-08-17 Thread Mike Girard
The data will be coming from a large XML file, so my script will parse that and 
make inserts into several different tables. It's fairly straightforward.

So is it correct to say that -

1. There is no compelling reason to do this without the DAL 
2. My options in the DAL are bulk_insert, a plain insert loop, and csv import, and 
performance-wise they're similar? (A short sketch of the first two follows.)

-- 





[web2py] Re: Best way to insert 200k records?

2012-08-17 Thread Mike Girard
10 minutes is all right. Still, if there are two methods and they are 
qualitatively equal apart from speed, I'd prefer to use the faster one.

So, posing the question once again: is there a compelling reason to use the 
DAL for bulk uploads? More specifically, what extras, if any, are being 
added by the DAL for inserts that a bulk insert using SQLite directly 
won't add?
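
To make the trade-off concrete, a sketch of going around the DAL for SQLite. 
The path is web2py's default location for an app's database, and names is 
illustrative; roughly, what you give up are field defaults, validation via 
validate_and_insert, computed/update fields, and backend portability:

import sqlite3

con = sqlite3.connect('applications/myapp/databases/storage.sqlite')
con.executemany("INSERT INTO person (name) VALUES (?)",
                [(n,) for n in names])    # names: parsed values (illustrative)
con.commit()
con.close()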


On Friday, August 17, 2012 12:58:01 PM UTC-4, nick name wrote:

 On Friday, August 17, 2012 8:29:12 AM UTC-4, Mike Girard wrote:

 The data will be coming from a large XML file, so my script will parse 
 that and make inserts into several different tables. It's fairly 
 straightforward.

 So is it correct to say that -

 1. There is no compelling reason to do this without the DAL 
 2.  My options in the DAL are bulk_insert, looping db.query and csv 
 import and that performance wise they're similar? 

 2 is correct (as long as you are going through the DAL; db.executesql 
 would be the non-DAL way from within web2py - and of course, you could use 
 your DB's native facilities)

 1. Is correct if you are not doing this often - e.g., it might take 10 
 minutes as opposed to 1 minute without DAL (just assuming, not based on any 
 actual measurement). So what?

 If you do this once an hour, then DAL processing and the individual record 
 insertion (even if you use bulk_insert or csv) might make it too slow for 
 you, and you would be better off looking at your database's native bulk 
 loading facilities.


-- 





[web2py] Re: Congratulations to Bruno and Mariano

2012-08-17 Thread Mike Girard
Congratulations, guys. 

I appreciate all the work you do. 

On Friday, August 17, 2012 10:13:45 AM UTC-4, Massimo Di Pierro wrote:

 Congratulations to Bruno Rocha and Mariano Reingart,

 New members of the Python Software Foundation.

 http://pyfound.blogspot.com.br/2012/08/welcome-new-psf-members.html




-- 





[web2py] Get row count for query without loading data into memory

2012-08-17 Thread Mike Girard
Does this load the rows into memory? 

db(db.person.id > 0).count()


If so, is there a DAL way to get a row count for a query without loading 
the rows? 

Thanks.

Mike



-- 





[web2py] Re: Get row count for query without loading data into memory

2012-08-17 Thread Mike Girard
Wow. You guys are so fast.

After posting, I found out about the Toolbar and looked at the SQL myself.

I feel stupid for asking. What threw me off was some code I saw of 
Massimo's where he used raw SQL to get a record count. I found myself 
wondering why. 
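
For anyone else checking, db._lastsql shows the statement the DAL actually 
sent (the same thing the toolbar displays), so it is easy to confirm that 
count() issues a single COUNT query and fetches no rows:

n = db(db.person.id > 0).count()
print(db._lastsql)    # a single SELECT COUNT(*) ... statement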



On Friday, August 17, 2012 6:00:20 PM UTC-4, Cliff Kachinske wrote:

 I'm not really a DAL guru, but I can't imagine DAL doing that.

 Every dbms I know about has something like SELECT COUNT(id) WHERE 
 ...query



 On Friday, August 17, 2012 5:54:32 PM UTC-4, Mike Girard wrote:

 Does this load the rows into memory? 

 db(db.person.id > 0).count()


 If so, is there a DAL way to get a row count for a query without loading 
 the rows? 

 Thanks.

 Mike





-- 





Re: [web2py] Re: Helper for many-to-many result sets?

2012-08-14 Thread Mike Girard
Salient bits of model at the top of the thread.

On Tuesday, August 14, 2012 2:16:56 AM UTC-4, rochacbruno wrote:

 I guess you can also do:

 table = TABLE(*[TR(TD(movie.title), TD(UL(*[LI(star.name) for star in 
 movie.stars.select()]))) for movie in movies])

 when a table is referenced by another, each row gets a DAL Set object named 
 after the referring table.

 movie.stars will be a DAL Set, which has select, update and delete methods.

 But whether this happens depends on your model and relation definitions.

 Can you share your model for those 2 tables?


 On Tue, Aug 14, 2012 at 3:10 AM, Bruno Rocha rocha...@gmail.comjavascript:
  wrote:

 Foreach movie in movies
 print movie.title
 foreach stars in movie.stars
print star.name




-- 





Re: [web2py] Re: Helper for many-to-many result sets?

2012-08-14 Thread Mike Girard
This all looks promising. Meanwhile, here's my model with irrelevant Fields 
omitted and below that my query for the Select. Thanks for the help!

A simplified version of my model:

db.define_table('movie',
    Field('title','string'))

db.define_table('person',
    Field('name', 'string', unique=True))

db.define_table('star',
    Field('movie_id', db.movie),
    Field('person_id', db.person))


movies_and_stars = db((db.movie.id == db.star.movie_id) &
                      (db.star.person_id == db.person.id))

On Tuesday, August 14, 2012 2:16:56 AM UTC-4, rochacbruno wrote:

 I guess you can also do:

 table = TABLE(*[TR(TD(movie.title), TD(UL(*[LI(star.name) for star in 
 movie.stars.select()]))) for movie in movies])

 when a table is referenced by another, he gets a DAL Set object with the 
 referer name.

 movie.*stars *will be a DAL Set. which has *select, update, delete*methods

 But it will only happens depending on your model and relations definition.

 Can you share your model for those 2 tables?


 On Tue, Aug 14, 2012 at 3:10 AM, Bruno Rocha rocha...@gmail.comjavascript:
  wrote:

 Foreach movie in movies
 print movie.title
 foreach stars in movie.stars
print star.name




-- 





[web2py] Re: Trouble adding lxml module to application

2012-08-13 Thread Mike Girard
Are you sure? I searched on this list and it seemed that there had been some 
problems. In light of this, I installed lxml in my 2.5 site-packages and ran 
web2py with the python2.5 command recommended on the installation page. 

Can someone confirm what Python version is best for running web2py as of 
now? 

Thanks. 

On Monday, August 13, 2012 2:01:35 AM UTC-4, pbreit wrote:

 Web2py runs fine (better?) on 2.6 and 2.7.

-- 





[web2py] Re: Trouble adding lxml module to application

2012-08-13 Thread Mike Girard
Thanks for the confirmation. Perhaps the official documentation should be 
updated. 

On Monday, August 13, 2012 9:17:51 AM UTC-4, Anthony wrote:

 Are you sure? I searched on this list and seemed that there had been some 
 problems.


 Shouldn't be any problems. 2.5 is the *oldest* version of Python with 
 which web2py will work (used to be 2.4), but it works fine with 2.6 and 2.7.

 Anthony 


-- 





[web2py] Re: Trouble adding lxml module to application

2012-08-13 Thread Mike Girard
The book also provides this command for running web2py from source.

python2.5 web2py.py

It was this, along with presumably obsolete posts in this group about issues 
that came up with newer Python versions, that made me think I had to use 2.5. 
I suppose if I hadn't been skimming, I would have felt more 
confident that 2.6 was supported. 

Anyway, I'm good to go now. Thanks for the help. 





On Monday, August 13, 2012 9:36:35 AM UTC-4, Anthony wrote:

 The book says:

 web2py runs with CPython (the C implementation) and Jython (the Java 
 implementation), on Python versions 2.4, 2.5, 2.6, and 2.7, although 
 officially it only supports 2.5 so that we can guarantee backward 
 compatibility for applications.

 I guess that sounds a bit misleading, as 2.6 and 2.7 are also officially 
 supported. It's just that the framework is written to target 2.5, so the 
 framework code doesn't include any language features that were new in 2.6 
 or 2.7 (of course, your app code can include such features if you're 
 running 2.6 or 2.7). I think we may need to remove 2.4 from that list as 
 well.

 Anthony


 On Monday, August 13, 2012 9:27:36 AM UTC-4, Mike Girard wrote:

 Thanks for the confirmation. Perhaps the official documentation should be 
 updated. 

 On Monday, August 13, 2012 9:17:51 AM UTC-4, Anthony wrote:

 Are you sure? I searched on this list and seemed that there had been 
 some problems.


 Shouldn't be any problems. 2.5 is the *oldest* version of Python with 
 which web2py will work (used to be 2.4), but it works fine with 2.6 and 2.7.

 Anthony 



-- 





Re: [web2py] retrieve data from many to many relation

2012-08-13 Thread Mike Girard
I am sure that this answer is exactly what I am looking for to solve a 
problem I am having. However, I don't quite understand the particulars. 
Vincenzo, could you please provide an example of the queries used here to 
get the movie linked up with the genre? Maybe some code? 
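
A minimal sketch of what Vincenzo describes in the reply quoted below. The 
table and field names (movies, movies_genres, genres_id) are assumptions taken 
from the queries quoted in that reply:

movie = db.movies(1)                    # any movie row; the id is illustrative
# web2py attaches a Set named after the referencing table to each selected row,
# so the link rows for this movie are reachable without writing the join yourself
for link in movie.movies_genres.select():
    print(link.genres_id.name)          # the recursive select fetches the genre row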

On Saturday, July 28, 2012 5:25:33 PM UTC-4, Vincenzo Ampolo wrote:

 On 07/28/2012 03:38 AM, Alec Taylor wrote: 
  Vincenzo: Have you looked at: https://github.com/niphlod/w2p_tvseries 

 What is that ? I can't figure out from the objectives section what 
 that application does... 

 On 07/28/2012 05:37 AM, Cliff Kachinske wrote: You need to define a set 
 that includes all three tables in the relation. 
  
  ((db.genres.id==db.movies_genres.genres_id) & 
  (db.movies_genres.movie_id==db.movies.id)) 
  
  To get the set of genres for a specific movie, do 
  
  ((db.genres.id==db.movies_genres.genres_id) & 
  (db.movies_genres.movie_id==db.movies.id) & 
  (db.movies.id==1))  # or whatever variable you like 
  
  
  Notice this query allows you to retrieve information from all three 
  tables at one time. 

 Which is exactly like the web2py book suggests. Btw to retrieve all the 
 information of a movie in one shot (so join movie with genres with casts 
 with pictures) may be really database intense (3 joins!) So after a bit 
 of hacking I figured out a simply way to accomplish the same goal: 

 web2py automatically add the many-to-many sets to an instance with the 
 same name of the relation table. In my case I've movie.movies_genres for 
 each movie. Given so I can easily get the genres of a movie in a 
 template just doing. 

 In [36]: for m in movie.movies_genres.select(): print m.genre.name 
 Thriller 

 Best Regards, 

 -- 
 Vincenzo Ampolo 
 http://vincenzo-ampolo.net 
 http://goshawknest.wordpress.com 


-- 





[web2py] Helper for many-to-many result sets?

2012-08-13 Thread Mike Girard
 

I have a movie table that has a many-to-many relationship with a person 
table expressed through a star table.

A simplified version of my model:

db.define_table('movie',
    Field('title','string'))

db.define_table('person',
    Field('name', 'string', unique=True))

db.define_table('star',
    Field('movie_id', db.movie),
    Field('person_id', db.person))

I am able to create a select that joins the three tables and produces a 
result with all the data I need.

It's easy to iterate through the result and produce something akin to this:

Movie Title  Star 1

Movie Title  Star 2

Movie Title  Star 3


What I want is:

Movie Title 1

Star 1, Star 2, Star 3


Movie Title 2

Star 1, Star 2, Star 3


Programmatically, I'd like something like:

for each movie in rows

<h2>movie.title</h2>

 <ul>

  for each star in movie.stars

  <li>star.name</li>


Before I write a function to pre-process the result, can someone tell me if 
there is a helper for producing a result of this kind? 

This thread addresses the same issue - 

https://groups.google.com/forum/?fromgroups#!topic/web2py/GQsMt4qvqSs

- but I was unable to discern the solution the question asker had produced 
for himself, the key to which was this:

web2py automatically add the many-to-many sets to an instance with the 

same name of the relation table

I do not know what that means. 
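
Failing a built-in helper, one small pre-processing pass over the joined 
result gives the nested shape described above. A sketch, using the simplified 
model from this post:

rows = db((db.movie.id == db.star.movie_id) &
          (db.star.person_id == db.person.id)).select(db.movie.title, db.person.name)

stars_by_movie = {}
for r in rows:
    # rows from a join are addressed per table: r.movie.title, r.person.name
    stars_by_movie.setdefault(r.movie.title, []).append(r.person.name)

for title, names in stars_by_movie.items():
    print(title)
    print(', '.join(names))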

-- 





Re: [web2py] Helper for many-to-many result sets?

2012-08-13 Thread Mike Girard
I don't think you have understood my question.

My concern is not that I have too many fields nor my field names. 

I am asking if there is an easy way to produce a data structure that lends 
itself to the output I want:

Foreach movie in movies
print movie.title
foreach stars in movie.stars
   print star.name



On Monday, August 13, 2012 6:47:13 PM UTC-4, Alec Taylor wrote:

 It sounds like you want to reduce the fields shown in the results, since 
 you already know some of the information.

 However the additional queries, though smaller, will require more 
 resources than a simple join. So I recommend just sticking with the join, 
 and displaying in your view what you'd like.

 As for the wrapper to give things easier to work with names, just specify 
 said information in your controller.
 On 14/08/2012 8:04 AM, Mike Girard mikegi...@gmail.com javascript: 
 wrote:

 I have a movie table that has a many-to-many relationship with a person 
 table expressed through a star table.

 A simplified version of my model:

 db.define_table('movie',

 Field('title','string'),

  db.define_table('person', 

  Field('name', 'string', unique=True),

 db.define_table('star', 

  Field('movie_id', db.movie),

  Field('person_id', db.person),

 I am able to create a select that joins the three tables and produces a 
 result with all the data I need.

 It's easy to iterate through the result and produce something akin to 
 this:

 Movie Title  Star 1

 Movie Title  Star 2

 Movie Title  Star 3


 What I want is:

 Movie Title 1

 Star 1, Star 2, Star 3


 Movie Title 2

 Star 1, Star 2, Star 3


 Programmatically, I'd like something like:

 for each movie in rows

 <h2>movie.title</h2>

  <ul>

   for each star in movie.stars

   <li>star.name</li>


 Before I write a function to pre-process the result, can someone tell me 
 if there is a helper for producing a result of this kind? 

 This thread addresses the same issue - 

 https://groups.google.com/forum/?fromgroups#!topic/web2py/GQsMt4qvqSs

 - but I was unable to discern the solution the question asker had 
 produced for himself, the key to which was this:

 web2py automatically add the many-to-many sets to an instance with the 

 same name of the relation table

 I do not know what that means. 

 -- 
  
  
  



-- 





Re: [web2py] Helper for many-to-many result sets?

2012-08-13 Thread Mike Girard
Yes, that was my plan, failing to locate something quicker and simpler. 
 The other thread I linked to suggested there was something.

I was unable to find a suitable example in the book. 

On Monday, August 13, 2012 7:10:29 PM UTC-4, Alec Taylor wrote:

 Just use a dictionary to do that. 

 Examples are in the book. 

 On Tue, Aug 14, 2012 at 8:59 AM, Mike Girard 
 mikegi...@gmail.comjavascript: 
 wrote: 
  I don't think you have understood my question. 
  
  My concern is not that I have too many fields nor my field names. 
  
  I am asking if there is an easy way to produce a data structure that 
 lends 
  itself to the output I want: 
  
  Foreach movie in movies 
  print movie.title 
  foreach stars in movie.stars 
 print star.name 
  
  
  
  On Monday, August 13, 2012 6:47:13 PM UTC-4, Alec Taylor wrote: 
  
  It sounds like you want to reduce the fields shown in the results, 
 since 
  you already know some of the information. 
  
  However the additional queries, though smaller, will require more 
  resources than a simple join. So I recommend just sticking with the 
 join, 
  and displaying in your view what you'd like. 
  
  As for the wrapper to give things easier to work with names, just 
 specify 
  said information in your controller. 
  
  On 14/08/2012 8:04 AM, Mike Girard mikegi...@gmail.com wrote: 
  
  I have a movie table that has a many-to-many relationship with a 
 person 
  table expressed through a star table. 
  
  A simplified version of my model: 
  
  db.define_table('movie', 
  
  Field('title','string'), 
  
   db.define_table('person', 
  
   Field('name', 'string', unique=True), 
  
  db.define_table('star', 
  
   Field('movie_id', db.movie), 
  
   Field('person_id', db.person), 
  
  I am able to create a select that joins the three tables and produces 
 a 
  result with all the data I need. 
  
  It's easy to iterate through the result and produce something akin to 
  this: 
  
  Movie Title  Star 1 
  
  Movie Title  Star 2 
  
  Movie Title  Star 3 
  
  
  What I want is: 
  
  Movie Title 1 
  
  Star 1, Star 2, Star 3 
  
  
  Movie Title 2 
  
  Star 1, Star 2, Star 3 
  
  
  Programmatically, I'd like something like: 
  
  for each movie in rows 
  
  <h2>movie.title</h2> 
  
   <ul> 
  
    for each star in movie.stars 
  
    <li>star.name</li> 
  
  
  Before I write a function to pre-process the result, can someone tell 
 me 
  if there is a helper for producing a result of this kind? 
  
  This thread addresses the same issue - 
  
  https://groups.google.com/forum/?fromgroups#!topic/web2py/GQsMt4qvqSs 
  
  - but I was unable to discern the solution the question asker had 
  produced for himself, the key to which was this: 
  
  web2py automatically add the many-to-many sets to an instance with 
 the 
  
  same name of the relation table 
  
  I do not know what that means. 
  
  -- 
  
  
  
  
  -- 
  
  
  


-- 





[web2py] Trouble adding lxml module to application

2012-08-12 Thread Mike Girard
I am attempting to use lxml in my app. I am using Mac OS X. I placed the 
lxml files in web2py/Content/Resources/applications/myapp/modules.  

I attempted to import lxml into default.py with this:

from lxml import etree


I am getting a module-not-found error. 

I anticipated problems because lxml required some TLC just to get it 
installed and working on my Mac. 

While it would be helpful to know how to get it working from the 
application's local modules folder, my preference would be for web2py to 
have all the Python modules in the system site-packages directory available 
for import, rather than local copies.  Is there some way I can set this up?
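
As a quick check, the following (run from a controller or the web2py shell) 
shows which interpreter web2py is actually using. When web2py runs from source 
it is whatever Python launched it, so that interpreter's site-packages 
(including lxml) are importable directly; the packaged binary ships its own 
interpreter, which is why system packages are not seen there:

import sys
print(sys.executable)     # the interpreter web2py is running under
from lxml import etree    # resolves from that interpreter's site-packages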

-- 





[web2py] Re: Trouble adding lxml module to application

2012-08-12 Thread Mike Girard
Yes, I was using the web2py binary. I have since opted to run from source 
and it works with one stipulation:

Your instructions say that to run from source you should use python2.5. lxml is installed 
in 2.6 on my machine. Running from source with python rather than python2.5 
seems to be fine, and the module-not-found error went away. 

Is it ok to run web2py with python 2.6 or am I likely to encounter problems 
down the road? 

Thanks for your help.  

On Sunday, August 12, 2012 11:39:56 PM UTC-4, Mike Girard wrote:

 I am attempting to use lxml in my app. I am using Mac OS X. I placed the 
 lxml files in web2py/Content/Resources/applications/myapp/modules.  

 I attempted to import lxml into default.py with this:

 from lxml import etree


 I am getting a module-not-found error. 

 I anticipated problems because lxml required some TLC just to get it 
 installed and working on my Mac. 

 While it would be helpful to know how to get it working from the 
 application's local modules folder, my preference would be for web2py to 
 have all the Python modules in the system site-packages directory available 
 for import, rather than local copies.  Is there some way I can set this up?



-- 





[web2py] Problem with Field display in SQLForm.Smartgrid

2012-08-09 Thread Mike Girard
Hello:

I am building a movie database site with web2py. I have a movies table 
which has a many-to-many relationship with several other items: stars, 
directors and genres. I am using a SQLForm.Smartgrid declared in my 
controller like so: 

def manage_movies():
grid = SQLFORM.smartgrid(db.movie)
return locals()

Everything seems to be working fine -- all of the reference tables are 
showing up. However, when I click on, say 'Stars', which is the reference 
table joining db.movie with db.person, the Movie ID and Star Fields are 
oddly populated. Attached is a screenshot which should make this clear. 

https://lh6.googleusercontent.com/-eS6fJPgPv5k/UCMiWvEM_7I/AAc/NzibaZS3H_Y/s1600/Screen+shot+2012-08-08+at+10.35.52+PM.png

If I click edit for a Star record, everything is fine. There are dropdowns 
with the appropriate movie and star selected. 

I realize I haven't provided any of my model code. Was hoping someone might 
be able to quickly id the problem without all that.

Thanks for your help. 



-- 





[web2py] Just a quick eyeball on this, perhaps?

2012-08-09 Thread Mike Girard
Can anyone tell just by looking at this what the problem with the ID and 
Star fields might be? The record is actually fine. When I click Edit, 
everything is as it should be. 

Sorry for not providing the full background on this, for brevity's sake. 
Was wondering if someone can see the problem just by looking at this. 

https://lh6.googleusercontent.com/-eS6fJPgPv5k/UCMiWvEM_7I/AAc/NzibaZS3H_Y/s1600/Screen+shot+2012-08-08+at+10.35.52+PM.png

-- 





[web2py] Re: Just a quick eyeball on this, perhaps?

2012-08-09 Thread Mike Girard
I guess so, but it's fairly straightforward. Like I said, when I click on 
the actual record, the movie and star fields are correctly populated. 

On Thursday, August 9, 2012 4:18:25 PM UTC-4, howesc wrote:

 movie_id is None because either the reference does not exist, or the record 
 it points to is represented as None

 star looks to be a Set, or the results of a query.  something has gone 
 wrong in the setup of the table or in the representation of that field.

 On Thursday, August 9, 2012 1:00:06 PM UTC-7, Mike Girard wrote:

 Can anyone tell just by looking at this what the problem with the ID and 
 Star fields might be? The record is actually fine. When I click Edit, 
 everything is as it should be. 

 Sorry for not providing the full background on this, for brevity's sake. 
 Was wondering if someone can see the problem just by looking at this. 


 https://lh6.googleusercontent.com/-eS6fJPgPv5k/UCMiWvEM_7I/AAc/NzibaZS3H_Y/s1600/Screen+shot+2012-08-08+at+10.35.52+PM.png



-- 





[web2py] Re: Just a quick eyeball on this, perhaps?

2012-08-09 Thread Mike Girard
Model:

db.define_table('movie',
Field('title','string'),
Field('api_id', 'string'),
Field('average_rating', 'double'),
Field('released', 'integer'),
Field('mpaa_rating', 'string'),
Field('synopsis', 'text'),
Field('short_synopsis', 'text'),
Field('created', 'datetime', default=request.now),
Field('modified', 'datetime', default=request.now),
format = '%(title)s')

db.define_table('person', 
 Field('name', 'string'),
 Field('api_id', 'string'),
 Field('birthday', 'date'),
 Field('created', 'datetime', default=request.now),
 Field('modified', 'datetime', default=request.now),
 format = '%(name)s')
 
db.define_table('star', 
 Field('movie_id', db.movie),
 Field('star', db.person),
 Field('created', 'datetime', default=request.now),
 Field('modified', 'datetime', default=request.now))

Grid code:

@auth.requires_login()
def manage_movies():
grid = SQLFORM.smartgrid(db.movie)
return locals()

Thanks for taking a look. Apologies if this is a really lame mistake. Trust 
that I do RTFM. 

On Thursday, August 9, 2012 6:16:58 PM UTC-4, Anthony wrote:

 Can you show the model and grid code?

 On Thursday, August 9, 2012 4:00:06 PM UTC-4, Mike Girard wrote:

 Can anyone tell just by looking at this what the problem with the ID and 
 Star fields might be? The record is actually fine. When I click Edit, 
 everything is as it should be. 

 Sorry for not providing the full background on this, for brevity's sake. 
 Was wondering if someone can see the problem just by looking at this. 


 https://lh6.googleusercontent.com/-eS6fJPgPv5k/UCMiWvEM_7I/AAc/NzibaZS3H_Y/s1600/Screen+shot+2012-08-08+at+10.35.52+PM.png



-- 





[web2py] Re: Just a quick eyeball on this, perhaps?

2012-08-09 Thread Mike Girard
I fixed it. My column names were screwed up. 

Sorry. 

On Thursday, August 9, 2012 11:20:31 PM UTC-4, Mike Girard wrote:

 Model:

 db.define_table('movie',
 Field('title','string'),
 Field('api_id', 'string'),
 Field('average_rating', 'double'),
 Field('released', 'integer'),
 Field('mpaa_rating', 'string'),
 Field('synopsis', 'text'),
 Field('short_synopsis', 'text'),
 Field('created', 'datetime','datetime', default=request.now),
 Field('modified', 'datetime','datetime', default=request.now),
 format = '%(title)s')

 db.define_table('person', 
  Field('name', 'string'),
  Field('api_id', 'string'),
  Field('birthday', 'date'),
  Field('created', 'datetime','datetime', default=request.now),
  Field('modified', 'datetime','datetime', default=request.now),
  format = '%(name)s')
  
 db.define_table('star', 
  Field('movie_id', db.movie),
  Field('star', db.person),
  Field('created', 'datetime','datetime', default=request.now),
  Field('modified', 'datetime','datetime', default=request.now))

 Grid code:

 @auth.requires_login()
 def manage_movies():
 grid = SQLFORM.smartgrid(db.movie)
 return locals()

 Thanks for taking a look. Apologies if this is a really lame mistake. 
 Trust that I do RTFM. 

 On Thursday, August 9, 2012 6:16:58 PM UTC-4, Anthony wrote:

 Can you show the model and grid code?

 On Thursday, August 9, 2012 4:00:06 PM UTC-4, Mike Girard wrote:

 Can anyone tell just by looking at this what the problem with the ID and 
 Star fields might be? The record is actually fine. When I click Edit, 
 everything is as it should be. 

 Sorry for not providing the full background on this, for brevity's sake. 
 Was wondering if someone can see the problem just by looking at this. 


 https://lh6.googleusercontent.com/-eS6fJPgPv5k/UCMiWvEM_7I/AAc/NzibaZS3H_Y/s1600/Screen+shot+2012-08-08+at+10.35.52+PM.png



-- 





[web2py] Search in Web2Py

2012-08-07 Thread Mike Girard
I notice that there is no search box on the main web2py site.  The search on 
the book site is somewhat unorthodox in that it only provides links to the pages where the 
search term occurs but does not show the matching context. Does web2py have a robust, 
conventional search utility? If not, what is the prescribed method for getting 
around this? 

--