>Thank you. The problem is that our plan was to have an app that DOES NOT need 
>to know the DB schema to work, so the insert statements were planned to simply 
>post data to the DB and let the DB handle the trimming. That way the app is 
>generic and works regardless of DB changes. 
>
>My idea of recompiling the FB client was to "capture" any insert statement on 
>the customer's side and replace it with a trimmed insert statement. I don't 
>see an issue with that; I am not sure why you think it is not possible. 
>Alternatively, we could capture the insert statements on the server side and 
>replace the original insert with a trimmed statement, using the DB schema to 
>know the size to use for each field. I was hoping to resolve it without a 
>recompile of the DB engine, but it seems I was wrong.
>
>Regarding the DB doing its job in checking and protecting against arithmetic 
>overflow, I still think the way it is done is a bug, as it is actually 
>checking the size of a value that is NOT what is being inserted: the trigger 
>has already changed the size of the value to be inserted, so the DB is 
>checking something irrelevant. I don't know if it was designed that way for a 
>reason, or if it could be "re-designed" to check the value actually being 
>inserted after all triggers have been applied. Under the current model, what 
>would happen if a BEFORE INSERT trigger updated the field so that it no 
>longer fits into the target table? The checking and validation was done 
>before the trigger, so I guess it would not pick up the issue, and who knows 
>what was actually inserted?

Hi Fabian!

I don't think telling the user that the field is too long to handle can be 
considered a bug. Sure, it would be nice if Firebird had infinitely sized 
buffers and didn't check the size until after the BEFORE INSERT trigger had 
finished, but it doesn't. First, Firebird receives data, then BEFORE triggers 
are executed, then the data is stored in the database. The error that you see 
occurs before the BEFORE INSERT trigger.
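To see the order of events, here is a minimal sketch (table and trigger names are made up; LEFT() assumes Firebird 2.1 or later). The overlong literal is rejected with "arithmetic exception, numeric overflow, or string truncation" when the value is moved into the row buffer, so the trimming trigger never gets a chance to run:

```sql
CREATE TABLE t (name VARCHAR(5));

SET TERM ^ ;
CREATE TRIGGER t_bi FOR t BEFORE INSERT AS
BEGIN
  /* Intended to trim the value, but the length check
     has already fired before this point. */
  NEW.name = LEFT(NEW.name, 5);
END^
SET TERM ; ^

/* Fails before t_bi executes: */
INSERT INTO t (name) VALUES ('this string is too long');
```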

Your problem is still solvable. Simply add another table. Let one table have 
huge field lengths (CHAR/VARCHAR or BLOB SUBTYPE TEXT) and use this only for 
imports, then use AFTER INSERT triggers to copy the first few characters of 
field(s) to the other table (which you in turn use for querying). Variations of 
this solution include adding another field rather than another table, or even 
another database ('another database' would probably be overkill and more 
difficult to implement in triggers).
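A rough sketch of that staging-table approach, with illustrative names and sizes (adjust VARCHAR lengths, or use BLOB SUB_TYPE TEXT, to suit your schema; LEFT() assumes Firebird 2.1+):

```sql
/* Generously sized table the app inserts into; it never rejects long data. */
CREATE TABLE customer_import (
  id   INTEGER NOT NULL,
  name VARCHAR(8000)
);

/* The "real" table with the tight limit, used for querying. */
CREATE TABLE customer (
  id   INTEGER NOT NULL,
  name VARCHAR(50)
);

SET TERM ^ ;
CREATE TRIGGER customer_import_ai FOR customer_import AFTER INSERT AS
BEGIN
  /* Copy a trimmed version into the query table;
     the trim happens here, so no truncation error is raised. */
  INSERT INTO customer (id, name)
  VALUES (NEW.id, LEFT(NEW.name, 50));
END^
SET TERM ; ^
```

The app keeps inserting blindly into customer_import, and the trigger enforces the real schema's limits.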

HTH,
Set
