The main problem is that the truncation is done silently.  In my case, we 
were creating new data models with fields like string(1000), and only 
recently did a team member notice that the result in the database was 
varchar(255).  It violates the principle of least astonishment :)  that a 
directive to create a field of a given size created a field of a different 
size and didn't complain.

The ideal solution would be to detect the MySQL version (via SELECT 
@@version;) during define_table and set the maximum length based on that 
version.  (I thought it might be possible to query the MySQL server 
variables (SHOW VARIABLES;) to get the maximum length more directly, but 
no such variable seems to exist; maybe someone else knows a way to get it.)
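Roughly what I have in mind (a sketch only -- the helper name 
mysql_max_varchar_length is mine, not the DAL's; it assumes db is a DAL 
instance connected to MySQL and that the version string has the usual 
"X.Y.Z-suffix" shape):

    def mysql_max_varchar_length(db):
        # VARCHAR was capped at 255 characters before MySQL 5.0.3;
        # from 5.0.3 on, the limit is 65535 bytes (subject to the
        # overall row-size limit).
        version = db.executesql('SELECT @@version;')[0][0]  # e.g. '5.1.41-log'
        numeric = version.split('-')[0]
        major, minor, patch = map(int, numeric.split('.')[:3])
        return 65535 if (major, minor, patch) >= (5, 0, 3) else 255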

I suppose the same could be done for other DBMSs.  And whatever the 
maximum length is for the particular DBMS/version, if the Field() call 
specifies a length longer than that maximum, return the same error as for 
other invalid field lengths (like -1).
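Something along these lines at define_table time (again a sketch -- I'm 
guessing at where in the DAL this check would live, and at the exact 
exception the DAL raises for invalid lengths):

    # fields is the list of Field objects passed to define_table
    max_len = mysql_max_varchar_length(db)  # helper sketched above
    for field in fields:
        if field.type == 'string' and field.length > max_len:
            raise SyntaxError('Field %s: length %s exceeds backend maximum %s'
                              % (field.name, field.length, max_len))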

Does this seem like the right approach?  I'd be happy to code and test 
this change and submit it for the experts' review.



On Saturday, January 22, 2011 11:02:47 AM UTC-5, Jonathan Z. wrote:
>
> I have a number of fields specifying a length > 255 (some as large as 
> 2K).  On a recent schema migrate, web2py truncated all of these fields 
> to 255, resulting in substantial data loss. 
>
> My database is mysql.  I'm on version 1.91.6 of web2py.  I'm pretty 
> sure this has something to do with the new DAL.
