> Regarding the DB doing its job in checking and protecting against arithmetic 
> overflow, I still think the way it is done is a bug: it is actually 
> checking the size of a field that is NOT what is being inserted, because the 
> trigger has already changed the size of the field to be inserted, so 
> the DB is actually checking something irrelevant. I don't know whether it 
> was designed that way for a reason, or whether it could 
> be "re-designed" to check the field actually being inserted after all 
> triggers have been applied. Under the current model, what would happen if 
> we had a BEFORE INSERT trigger that updates the field so that it no longer 
> fits into the target table? The checking and validation was done 
> before the trigger, so I guess it would not pick up the issue, and who knows 
> what was actually inserted?
> 
> Regards,
> Fabian
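[Fabian's worry can be sketched in plain Python. This is a toy model, not Firebird's actual internals; the column size, trigger body, and function names are all made up for illustration. The point is simply that a length check performed before a BEFORE INSERT trigger runs can miss an overflow the trigger itself introduces.]

```python
COLUMN_SIZE = 10  # hypothetical stand-in for a VARCHAR(10) column


def before_insert_trigger(value: str) -> str:
    # The trigger grows the value beyond what the client sent.
    return value + " [audited]"


def insert_check_before_trigger(value: str) -> str:
    # Model of validation happening BEFORE the trigger fires.
    if len(value) > COLUMN_SIZE:
        raise ValueError("string right truncation")
    return before_insert_trigger(value)  # trigger runs after the check


def insert_check_after_trigger(value: str) -> str:
    # Model of validation happening AFTER the trigger fires.
    value = before_insert_trigger(value)
    if len(value) > COLUMN_SIZE:
        raise ValueError("string right truncation")
    return value


stored = insert_check_before_trigger("abc")  # passes the early check...
print(len(stored) > COLUMN_SIZE)             # ...yet the stored value overflows
```

With the check moved after the trigger, the same insert would raise "string right truncation" instead of storing an oversized value.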


Hi,

You are wrong. 
Think about a FLOAT field, not a VARCHAR, 
where your client puts the data 'blah blah blah' in the INSERT statement. 
Do you really think this should be transferred into the trigger's context in the DB? How 
would you then declare the variables in the trigger: at the maximum possible VARCHAR size, as a BLOB...?

It is the client side's job to check this; if it does not, the DB returns an error.
The problem is that you did not tell your customers what the accepted format for 
string data in your system is.

If they create a wrong statement, they should get an error response, and that is it.
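[A minimal sketch of this point, using SQLite from Python rather than Firebird purely so the example is runnable; the table and column names are invented, and a CHECK constraint stands in for the engine's own type checking. The engine rejects a wrongly formatted value outright and the client gets an error response.]

```python
import sqlite3

# Hypothetical table: 'price' must actually hold a number.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE t (price REAL CHECK (typeof(price) IN ('real', 'integer')))"
)

try:
    # The client puts text where a float is expected.
    con.execute("INSERT INTO t (price) VALUES (?)", ("blah blah blah",))
except sqlite3.IntegrityError as exc:
    # The engine rejects the statement; the bad value never reaches any
    # trigger context, the client simply gets an error and is done.
    print("error response:", exc)
```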



regards,
Karol Bieniaszewski



