Re: [web2py] Bug With CSV Export and import

2012-10-09 Thread Simon Lukell
In my experience you're better off using the native database backup/restore 
mechanism if you want to retain IDs.

web2py's CSV import preserves references, but assigns new sequential IDs, as 
long as you include all the relevant tables in the export. From memory, if you 
don't want the actual data of a referenced table to be imported, you can limit 
its model to contain no fields at all.

(IDs changing is normal for portable databases, since ID types differ between 
backends. The trick is never to use them in application code or expose them in 
public URLs, email links, etc.)
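web2py's actual importer lives in gluon/dal.py; the sketch below only illustrates the idea with the standard library and a made-up two-table schema (person/dog are hypothetical). Rows get fresh sequential IDs on import, and an old-id to new-id map rewrites the references so they stay consistent:

```python
import csv
import io
import sqlite3

# Hypothetical schema: dog.owner references person.id
SCHEMA = """
CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dog (id INTEGER PRIMARY KEY, name TEXT,
                  owner INTEGER REFERENCES person);
"""

def export_tables(conn):
    """Dump each table (ids included) to its own CSV string."""
    dumps = {}
    for table in ("person", "dog"):
        buf = io.StringIO()
        writer = csv.writer(buf)
        cur = conn.execute("SELECT * FROM %s" % table)
        writer.writerow([d[0] for d in cur.description])
        writer.writerows(cur.fetchall())
        dumps[table] = buf.getvalue()
    return dumps

def import_tables(conn, dumps):
    """Re-insert rows with fresh sequential ids, rewriting references
    through an old-id to new-id map. This mirrors (but is not) what
    web2py's whole-database CSV import does to keep references
    consistent."""
    id_map = {}
    for old_id, name in list(csv.reader(io.StringIO(dumps["person"])))[1:]:
        cur = conn.execute("INSERT INTO person (name) VALUES (?)", (name,))
        id_map[old_id] = cur.lastrowid
    for _old, name, owner in list(csv.reader(io.StringIO(dumps["dog"])))[1:]:
        conn.execute("INSERT INTO dog (name, owner) VALUES (?, ?)",
                     (name, id_map[owner]))
```

Note that the referenced table (person) must be imported first, which is why all relevant tables need to be in the export.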

-- 





Re: [web2py] To uuid or not?

2012-05-14 Thread Simon Lukell




 As I understand it, the IDs get rebuilt if you export a table as a CSV file 
 and import it on another computer.  When the whole database gets exported 
 and imported, the relationships stay consistent.

  
Yes, I've learned this the hard way. Moving the database between computers 
is OK if you use the native database backup method, but you definitely lose 
database portability via the DAL if you expose IDs in URLs.  We don't use 
UUIDs for URLs, but rather hashes that can be truncated (and possibly 
URL-safe base64-encoded) to keep the values brief.
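For illustration, one way to build such tokens (a sketch, not our exact scheme; SECRET_KEY and public_token are made-up names): HMAC some stable value, base64-encode it URL-safely, and truncate.

```python
import base64
import hashlib
import hmac

SECRET_KEY = b"change-me"  # hypothetical per-app secret, kept out of URLs

def public_token(value, length=10):
    """Derive a short, URL-safe identifier from a stable value
    (e.g. the row id at creation time). Compute it once, store it in
    its own column, and look records up by that column: the internal
    auto-increment id then never appears in a URL and is free to
    change across backups and restores."""
    digest = hmac.new(SECRET_KEY, str(value).encode(), hashlib.sha256).digest()
    return base64.urlsafe_b64encode(digest)[:length].decode()
```

Truncation trades brevity for collision risk, so pick the length against your expected table size (birthday bound) and enforce uniqueness on the stored column.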


[web2py] Re: ATTENTION before upgrading to trunk....

2012-04-27 Thread Simon Lukell
Anyone storing Facebook IDs will hit that limit, so as a PostgreSQL shop 
developing lots of FB apps, we can't wait for this to be released.

Should have some test results on Monday.


On Friday, 27 April 2012 04:40:06 UTC+10, villas wrote:

 Thinking more about this,  an INTEGER will generally store over 2 billion 
 records.  Is the reason for updating to BIGINT due to someone having hit 
 that limit?  

 An inner voice is saying: "Can't the person who has breached that limit 
 be asked to create his own tables instead of giving all the rest of us the 
 inconvenience?"  I shall now replace my tin-foil hat.
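For scale (the Facebook ID below is made up, but in the realistic range for Graph API user IDs):

```python
# PostgreSQL column ceilings
INT32_MAX = 2**31 - 1            # INTEGER: 2,147,483,647
INT64_MAX = 2**63 - 1            # BIGINT

fb_user_id = 100004286533124     # hypothetical Graph API user id
assert fb_user_id > INT32_MAX    # would overflow an INTEGER column
assert fb_user_id < INT64_MAX    # fits comfortably in BIGINT
```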



[web2py] Re: import_from_csv_file broken

2012-02-16 Thread Simon Lukell
Not sure if there is a difference in the resulting file, but I usually
use db.category.export_to_csv_file(open(...)).

On Feb 16, 3:24 pm, Richard Penman richar...@gmail.com wrote:
 I exported a table from sqlite with:
 open(filename, 'w').write(str(db(db.category.id).select()))

 Output file looks as expected.

 And then I tried importing into postgres with:
 db.category.import_from_csv_file(filename)

 Each row was inserted but all values are NULL.
 Any ideas?

 Version 1.99.4
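The NULLs could have several causes, but one general reason to prefer the DAL's own CSV export over hand-rolled dumps is quoting. A quick stdlib illustration (made-up category data):

```python
import csv
import io

rows = [("1", "Books, Fiction"), ("2", 'He said "hi"')]

# Naive: join fields with commas. A comma inside a value corrupts the file.
naive = "\n".join(",".join(r) for r in rows)
parsed = [line.split(",") for line in naive.splitlines()]
assert parsed[0] == ["1", "Books", " Fiction"]   # field split in two!

# Proper: csv.writer quotes as needed, so the data round-trips intact.
buf = io.StringIO()
csv.writer(buf).writerows(rows)
parsed = list(csv.reader(io.StringIO(buf.getvalue())))
assert [tuple(r) for r in parsed] == rows
```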


[web2py] Re: DAL speed - an idea

2012-02-10 Thread Simon Lukell
+1
Having this option would make it really simple to switch between the
full-blown DAL result set and a faster, stripped-down one (which could
then be adapted with the processor to keep the rest of the code
working).

 I've been thinking about something like this as well. Instead of a separate
 select_raw() method, maybe we can just add a raw=True|False argument to the
 existing select() method.
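Purely as a sketch of the proposed API (hypothetical, not web2py code), with plain sqlite3 standing in for the database adapter:

```python
import sqlite3

def select(conn, sql, raw=False, processor=None):
    """Hypothetical signature for the proposal: raw=True returns the
    driver's plain tuples instead of building rich row objects; an
    optional processor adapts each row so the rest of the application
    keeps working either way."""
    cursor = conn.execute(sql)
    columns = [d[0] for d in cursor.description]
    rows = cursor.fetchall()                     # cheap, plain tuples
    if not raw:
        # stand-in for the full-blown DAL Rows construction
        rows = [dict(zip(columns, r)) for r in rows]
    if processor is not None:
        rows = [processor(r) for r in rows]
    return rows
```

The processor hook is what lets existing code keep receiving dict-like rows while hot paths opt into the raw tuples.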


[web2py] Re: DAL Limitations with aggregate and union queries

2012-02-05 Thread Simon Lukell
As a 'bystander', I personally think that Niphlod's response is of
such good quality that the gist of it deserves inclusion in the book.


[web2py] Re: Python / web2py or.... the learning path?

2012-01-18 Thread Simon Lukell
Does anyone have an opinion about the web2py video lectures from
codeschool.org ? I had a look at the first video and I thought it was
quite a nice intro:

http://www.youtube.com/playlist?list=PL978B2CE2D788F745


[web2py] Re: Development of the Framework

2011-12-23 Thread Simon Lukell
I also did Python first, web second and was fortunate enough to have
the time to compare pretty much every single framework out there.  The
main reasons web2py is my preferred framework:

- it is lean and easy to understand 'all the way down'
- this means you are not forced into doing anything the $FRAMEWORK way
- it can run anywhere, even embedded, and against almost any database
engine (I'm running all our apps in dev on my phone)
- it is extremely productive to develop with
- the documentation and community support are outstanding (and as I
said, if needed, just read the source in gluon; it is awesome)

In short, we achieve the best mix of freedom (implementation and
deployment), productivity and support, which makes most tech sense to
me and business sense to our agency.

With a Python programming background, I agree that the criticisms
against web2py indeed lack merit once you actually take a deeper look.
In fact, the way web2py works at its core makes a lot of sense to me
in a web context.


[web2py] Re: web2py 1.99.3 is OUT

2011-12-10 Thread Simon Lukell

 We could not call it 1.100 because it would break the old automatic
 upgrade mechanism (1 comes before 9, yikes).

Why not make the upgrade from an old version a two-step process?  Or is
this too high a price to pay for being absolutely clear that the API
has not been broken?

I know bumping major versions is all the rage, but isn't backward
compatibility one of the strongest selling points of web2py?
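For the record, the problem is lexical string comparison. A sketch of the usual fix, comparing release numbers numerically component by component:

```python
def version_key(v):
    """Turn '1.99.3' into (1, 99, 3) so comparisons are numeric."""
    return tuple(int(part) for part in v.split("."))

# String comparison gets it wrong: '1' sorts before '9',
# so "1.100" looks *older* than "1.99".
assert "1.100" < "1.99"
# Numeric comparison gets it right:
assert version_key("1.100") > version_key("1.99")
```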