[jira] [Commented] (HIVE-13098) Add a strict check for when the decimal gets converted to null due to insufficient width
[ https://issues.apache.org/jira/browse/HIVE-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15536866#comment-15536866 ]

Sergey Shelukhin commented on HIVE-13098:
-----------------------------------------

Not sure how that config is relevant? Different columns could have different widths, etc. Sqoop could have a config that would ensure that data fits in each column, but I don't think it does.

> Add a strict check for when the decimal gets converted to null due to insufficient width
> ----------------------------------------------------------------------------------------
>
>                 Key: HIVE-13098
>                 URL: https://issues.apache.org/jira/browse/HIVE-13098
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Sergey Shelukhin
>            Assignee: Sergey Shelukhin
>         Attachments: HIVE-13098.WIP.patch, HIVE-13098.WIP2.patch
>
>
> When e.g. 99 is selected as decimal(5,0), the result is null. This can be problematic, esp. if the data is written to a table and lost without the user realizing it. There should be an option to error out in such cases instead; it should probably be on by default and the error message should instruct the user on how to disable it.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
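The failure mode in the description can be made concrete. Below is a minimal sketch, using plain `java.math.BigDecimal` with hypothetical names (Hive's actual HiveDecimal code paths differ), of the kind of width check the issue asks for: a value fits decimal(p,s) only if, after rounding to s fractional digits, it needs at most p-s integer digits.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DecimalWidthCheck {

    // Returns true if `value`, after rounding to `scale` fractional digits,
    // still fits in decimal(precision, scale), i.e. needs at most
    // (precision - scale) integer digits.
    public static boolean fits(BigDecimal value, int precision, int scale) {
        BigDecimal rounded = value.setScale(scale, RoundingMode.HALF_UP);
        int integerDigits = rounded.precision() - rounded.scale();
        return integerDigits <= precision - scale;
    }

    public static void main(String[] args) {
        System.out.println(fits(new BigDecimal("99"), 5, 0));     // fits: 2 integer digits <= 5
        System.out.println(fits(new BigDecimal("123456"), 5, 0)); // overflows: 6 > 5
    }
}
```

With a check like this in the write path, Hive could raise an error (or, with the strict option off, fall back to today's null) instead of silently losing the row.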
[ https://issues.apache.org/jira/browse/HIVE-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15536856#comment-15536856 ]

Gopal V commented on HIVE-13098:
--------------------------------

bq. However that would only work in queries, not for automated pipelines/writers.

There's already a config for this problem for SQOOP, right? {{sqoop.bigdecimal.format.string}}?
[ https://issues.apache.org/jira/browse/HIVE-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15536835#comment-15536835 ]

Sergey Shelukhin commented on HIVE-13098:
-----------------------------------------

Well, the crux of the matter is that whatever solution we implement in Hive would be super unwieldy code-wise, because decimals, decimal OIs, etc. are created in 100 places in giant static methods. I was going to add a global for compilation (thread-local, since compilation is single-threaded) to be able to populate it everywhere. At runtime (or during import) it can be read from the fields in runtime objects (see the DecimalUdf interface in the patch and its usage). Then we can choose what to do with it. The question is whether to do it at all, esp. since, as Gopal noted, other types are also converted to null on overflow (unless they are bugged like the decimal-to-int cast ;)).
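The thread-local compile-time global described above can be sketched as follows. All names are hypothetical; this illustrates the pattern, not Hive's implementation. Because compilation of a query runs on a single thread, a ThreadLocal set at the start of compilation is visible to every static code path without being threaded through each call site.

```java
// Hypothetical sketch of a thread-local compile-time strictness flag.
public class CompileTimeStrictness {

    private static final ThreadLocal<Boolean> STRICT_DECIMAL =
        ThreadLocal.withInitial(() -> Boolean.FALSE);

    // Called once when query compilation starts, from the session config.
    public static void beginCompilation(boolean strictDecimal) {
        STRICT_DECIMAL.set(strictDecimal);
    }

    // Readable from any static code path that creates decimals or OIs.
    public static boolean isStrictDecimal() {
        return STRICT_DECIMAL.get();
    }

    // Avoid leaking the setting into the next query compiled on this thread.
    public static void endCompilation() {
        STRICT_DECIMAL.remove();
    }

    public static void main(String[] args) {
        beginCompilation(true);
        System.out.println(isStrictDecimal()); // true
        endCompilation();
        System.out.println(isStrictDecimal()); // back to the default: false
    }
}
```

The `remove()` in `endCompilation` matters on pooled threads (e.g. HS2 handler threads), where a stale value would otherwise bleed into the next query.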
[ https://issues.apache.org/jira/browse/HIVE-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15536749#comment-15536749 ]

Matt McCline commented on HIVE-13098:
-------------------------------------

There are other industry solutions. E.g. Greenplum added an ERROR TABLE feature a long time ago for saving rejected rows so they could be cleaned and added later. Also, see Teradata.
[ https://issues.apache.org/jira/browse/HIVE-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15536662#comment-15536662 ]

Sergey Shelukhin commented on HIVE-13098:
-----------------------------------------

[~mmccline] the main concern here is automated nulls, where people get them after importing a large amount of data. If the ETL runs every day, they cannot be expected to look at the data every day (arguably the data should be cleaned up by something else before Hive in this case, but people make mistakes and there are bugs in other code...). One way to handle this for most cases would be to break the existing behavior to always throw, and add a separate UDF ("trycast"?) for people who don't care. However, that would only work in queries, not for automated pipelines/writers.
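The "always throw by default, add a lenient trycast for people who don't care" split can be sketched like this, with hypothetical names and BigDecimal standing in for HiveDecimal:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class CastBehavior {

    // Strict behavior: error out on overflow instead of silently producing null.
    public static BigDecimal strictCast(BigDecimal v, int precision, int scale) {
        BigDecimal r = v.setScale(scale, RoundingMode.HALF_UP);
        if (r.precision() - r.scale() > precision - scale) {
            throw new ArithmeticException(
                v + " does not fit in decimal(" + precision + "," + scale + ")");
        }
        return r;
    }

    // trycast: today's lenient behavior, null on overflow, opted into explicitly.
    public static BigDecimal tryCast(BigDecimal v, int precision, int scale) {
        try {
            return strictCast(v, precision, scale);
        } catch (ArithmeticException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(strictCast(new BigDecimal("99"), 5, 0));   // 99
        System.out.println(tryCast(new BigDecimal("123456"), 5, 0));  // null
    }
}
```

As the comment notes, this only protects interactive queries; a writer that never routes through the cast UDF would still need the strict check in the write path itself.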
[ https://issues.apache.org/jira/browse/HIVE-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15535333#comment-15535333 ]

Matt McCline commented on HIVE-13098:
-------------------------------------

[~hagleitn] Perhaps a different approach.
[ https://issues.apache.org/jira/browse/HIVE-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15535331#comment-15535331 ]

Matt McCline commented on HIVE-13098:
-------------------------------------

What if we added some function(s) to help people explore their data? What about a function that takes a column value or expression and a target data type, and reports on how that conversion would go.

For example, for string to int, it could report:
- the string doesn't parse to a number
- the string has decimal digits that would be thrown away
- the number parses but would overflow an int

For string to decimal, it could report:
- (parse errors)
- integer digits will not fit in the decimal precision
- decimal digits would require rounding given the target scale

We could even go further and have function(s) that examine a string column and speculate on good possible data types that would be appropriate for a conversion. We could borrow ideas from the schema-discovery folks (Drill?).
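The diagnostic function proposed above could look roughly like this sketch; the names and outcome categories are hypothetical, and BigDecimal stands in for Hive's decimal handling:

```java
import java.math.BigDecimal;

public class ConversionReport {

    enum Outcome { OK, PARSE_ERROR, OVERFLOW, NEEDS_ROUNDING }

    // Reports what converting `s` to decimal(precision, scale) would do,
    // without actually performing the conversion.
    public static Outcome checkStringToDecimal(String s, int precision, int scale) {
        BigDecimal v;
        try {
            v = new BigDecimal(s.trim());
        } catch (NumberFormatException e) {
            return Outcome.PARSE_ERROR;    // string doesn't parse to a number
        }
        int integerDigits = v.precision() - v.scale();
        if (integerDigits > precision - scale) {
            return Outcome.OVERFLOW;       // integer digits won't fit
        }
        if (v.scale() > scale) {
            return Outcome.NEEDS_ROUNDING; // fractional digits would be rounded
        }
        return Outcome.OK;
    }

    public static void main(String[] args) {
        System.out.println(checkStringToDecimal("99", 5, 0));     // OK
        System.out.println(checkStringToDecimal("123456", 5, 0)); // OVERFLOW
        System.out.println(checkStringToDecimal("abc", 5, 0));    // PARSE_ERROR
    }
}
```

Run as an aggregate over a whole column, such a function would give users a per-category count before they commit to a schema, which is the exploratory use the comment has in mind.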
[ https://issues.apache.org/jira/browse/HIVE-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15534325#comment-15534325 ]

Sergey Shelukhin commented on HIVE-13098:
-----------------------------------------

I will stop working on this for now because it's a giant, annoying time sink. If there are no objections to the thread-local approach, I will go with that.
[ https://issues.apache.org/jira/browse/HIVE-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15534285#comment-15534285 ]

Sergey Shelukhin commented on HIVE-13098:
-----------------------------------------

Upon looking further at the OI path, I don't think it's possible to propagate it there without major changes; in fact, the OI-related parts of this patch are not valid, since OIs are assumed to be stateless and are cached process-wide, ditto for TypeInfo-s. There are lots of static method paths accessing those... I think I might scrap a lot of the patch and add a globally accessible static that would have to be initialized on CLI/HS2/task startup. The only exception would be the write path that happens outside of Hive services... This will reduce the size of the patch a lot (but also make it a global setting, not modifiable per query...) [~ashutoshc] [~hagleitn] [~jdere] opinions?
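The process-wide static alternative might look like this sketch (hypothetical names). The tradeoff noted above is visible in the shape of the API: the flag is set exactly once at startup and cannot be changed per query.

```java
// Hypothetical sketch of a process-wide strictness flag, set once at
// CLI/HS2/task startup. Unlike a thread-local, it needs no plumbing through
// the stateless, process-wide-cached OIs and TypeInfos, but every query in
// the process sees the same value.
public class GlobalStrictness {

    private static volatile boolean strictDecimal = false;
    private static boolean initialized = false;

    // Must run once during process startup, before any query touches it.
    public static synchronized void initialize(boolean strict) {
        if (initialized) {
            throw new IllegalStateException("already initialized");
        }
        strictDecimal = strict;
        initialized = true;
    }

    // Safe to read from any thread after startup.
    public static boolean isStrictDecimal() {
        return strictDecimal;
    }

    public static void main(String[] args) {
        initialize(true);
        System.out.println(isStrictDecimal()); // true
    }
}
```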
[ https://issues.apache.org/jira/browse/HIVE-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15533799#comment-15533799 ]

Hive QA commented on HIVE-13098:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12830837/HIVE-13098.WIP2.patch

{color:green}SUCCESS:{color} +1 due to 57 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 97 failed/errored test(s), 10645 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_select]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ctas]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[decimal_1]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[decimal_2]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[decimal_5]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[decimal_precision]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[decimal_skewjoin]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[decimal_stats]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_ppd_decimal]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_ppd_decimal]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_format_number]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_greatest]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_least]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_to_byte]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_to_long]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_to_short]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_aggregate_9]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_between_in]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_cast_constant]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_decimal_1]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_decimal_2]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_decimal_3]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_decimal_aggregate]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_decimal_precision]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_decimal_udf]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_join_part_col_char]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_struct_in]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_0]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_13]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_17]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_short_regress]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[tez_union_decimal]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[tez_vector_dynpart_hashjoin_1]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[vector_aggregate_9]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[vector_between_in]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[vector_cast_constant]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[vector_char_mapjoin1]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[vector_decimal_2]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[vector_decimal_3]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[vector_decimal_aggregate]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[vector_decimal_precision]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[vector_decimal_udf]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[vector_inner_join]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[vector_interval_mapjoin]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[vector_join_filters]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[vector_left_outer_join2]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[vector_left_outer_join]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[vector_leftsemi_mapjoin]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[vector_mapjoin_reduce]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[vector_outer_join0]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[vector_outer_join1]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[vector_outer_join2]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[vector_outer_join3]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[vector_outer_join4]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[vector_outer_join5]
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[vector_out
[ https://issues.apache.org/jira/browse/HIVE-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528392#comment-15528392 ]

Hive QA commented on HIVE-13098:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12830622/HIVE-13098.WIP.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1325/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1325/console
Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-Build-1325/

Messages:
{noformat}
This message was trimmed, see log for full details
 [copy] Copying 15 files to /data/hive-ptest/working/apache-github-source-source/itests/custom-udfs/udf-vectorized-badexample/target/tmp/conf
[INFO] Executed tasks
[INFO]
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ udf-vectorized-badexample ---
[INFO] No sources to compile
[INFO]
[INFO] --- maven-surefire-plugin:2.19.1:test (default-test) @ udf-vectorized-badexample ---
[INFO] Tests are skipped.
[INFO]
[INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ udf-vectorized-badexample ---
[INFO] Building jar: /data/hive-ptest/working/apache-github-source-source/itests/custom-udfs/udf-vectorized-badexample/target/udf-vectorized-badexample-2.2.0-SNAPSHOT.jar
[INFO]
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ udf-vectorized-badexample ---
[INFO]
[INFO] --- maven-install-plugin:2.4:install (default-install) @ udf-vectorized-badexample ---
[INFO] Installing /data/hive-ptest/working/apache-github-source-source/itests/custom-udfs/udf-vectorized-badexample/target/udf-vectorized-badexample-2.2.0-SNAPSHOT.jar to /data/hive-ptest/working/maven/org/apache/hive/hive-it-custom-udfs/udf-vectorized-badexample/2.2.0-SNAPSHOT/udf-vectorized-badexample-2.2.0-SNAPSHOT.jar
[INFO] Installing /data/hive-ptest/working/apache-github-source-source/itests/custom-udfs/udf-vectorized-badexample/pom.xml to /data/hive-ptest/working/maven/org/apache/hive/hive-it-custom-udfs/udf-vectorized-badexample/2.2.0-SNAPSHOT/udf-vectorized-badexample-2.2.0-SNAPSHOT.pom
[INFO]
[INFO]
[INFO] Building Hive Integration - HCatalog Unit Tests 2.2.0-SNAPSHOT
[INFO]
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hive-hcatalog-it-unit ---
[INFO] Deleting /data/hive-ptest/working/apache-github-source-source/itests/hcatalog-unit/target
[INFO] Deleting /data/hive-ptest/working/apache-github-source-source/itests/hcatalog-unit (includes = [datanucleus.log, derby.log], excludes = [])
[INFO]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce-no-snapshots) @ hive-hcatalog-it-unit ---
[INFO]
[INFO] --- maven-antrun-plugin:1.7:run (download-spark) @ hive-hcatalog-it-unit ---
[INFO] Executing tasks
main:
[INFO] Executed tasks
[INFO]
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hive-hcatalog-it-unit ---
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ hive-hcatalog-it-unit ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /data/hive-ptest/working/apache-github-source-source/itests/hcatalog-unit/src/main/resources
[INFO] Copying 3 resources
[INFO]
[INFO] --- maven-antrun-plugin:1.7:run (define-classpath) @ hive-hcatalog-it-unit ---
[INFO] Executing tasks
main:
[INFO] Executed tasks
[INFO]
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hive-hcatalog-it-unit ---
[INFO] No sources to compile
[INFO]
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ hive-hcatalog-it-unit ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /data/hive-ptest/working/apache-github-source-source/itests/hcatalog-unit/src/test/resources
[INFO] Copying 3 resources
[INFO]
[INFO] --- maven-antrun-plugin:1.7:run (setup-test-dirs) @ hive-hcatalog-it-unit ---
[INFO] Executing tasks
main:
 [mkdir] Created dir: /data/hive-ptest/working/apache-github-source-source/itests/hcatalog-unit/target/tmp
 [mkdir] Created dir: /data/hive-ptest/working/apache-github-source-source/itests/hcatalog-unit/target/warehouse
 [mkdir] Created dir: /data/hive-ptest/working/apache-github-source-source/itests/hcatalog-unit/target/tmp/conf
 [copy] Copying 15 files to /data/hive-ptest/working/apache-github-source-source/itests/hcatalog-unit/target/tmp/conf
[INFO] Executed tasks
[INFO]
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ hive-hcatalog-it-unit ---
[INFO] Compiling 8 sour
[ https://issues.apache.org/jira/browse/HIVE-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15317581#comment-15317581 ]

Sergey Shelukhin commented on HIVE-13098:
-----------------------------------------

This is especially problematic for implicit conversions...