seddonm1 commented on a change in pull request #8966:
URL: https://github.com/apache/arrow/pull/8966#discussion_r546304152



##########
File path: rust/datafusion/tests/sql.rs
##########
@@ -1826,3 +1826,21 @@ async fn csv_between_expr_negated() -> Result<()> {
     assert_eq!(expected, actual);
     Ok(())
 }
+
+#[tokio::test]
+async fn string_expressions() -> Result<()> {
+    let mut ctx = ExecutionContext::new();
+    register_aggregate_csv(&mut ctx)?;
+    let sql = "SELECT
+        char_length('josé') AS char_length
+        ,character_length('josé') AS character_length
+        ,lower('TOM') AS lower
+        ,upper('tom') AS upper
+        ,trim(' tom ') AS trim

Review comment:
      @andygrove copying you in, as this involves a design decision:
   
   I have now added the `NULL` value to both the test cases and the planner.
   
   This is where things get interesting. For this statement:
   
   ```sql
   SELECT NULL
   ```
   
   Spark implements a special `NullType` for this return type, but that creates a lot of side effects because consumers like the Parquet writer and JDBC drivers do not support the type.
   
   I tested Postgres:
   
   ```sql
   CREATE TABLE test AS
   SELECT NULL;
   ```
   
   The DDL for this table shows that column as a `text` type, which is why I have applied the default `utf8` type to `Value(Null)`.
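   The planner behaviour being described can be sketched roughly as below. This is a minimal, self-contained illustration, not the actual DataFusion code: `ScalarValue` and `DataType` here are simplified stand-ins for the real Arrow/DataFusion types, and `literal_type` is a hypothetical helper.
   
   ```rust
   // Simplified stand-ins for the Arrow/DataFusion types (hypothetical sketch).
   #[derive(Debug, PartialEq)]
   enum DataType {
       Utf8,
       Int64,
   }
   
   enum ScalarValue {
       Null,
       Int64(i64),
       Utf8(String),
   }
   
   // Resolve the planned type of a literal value.
   fn literal_type(v: &ScalarValue) -> DataType {
       match v {
           // Following the Postgres behaviour described above: a bare,
           // untyped NULL defaults to a text-like type rather than a
           // dedicated NullType.
           ScalarValue::Null => DataType::Utf8,
           ScalarValue::Int64(_) => DataType::Int64,
           ScalarValue::Utf8(_) => DataType::Utf8,
       }
   }
   
   fn main() {
       assert_eq!(literal_type(&ScalarValue::Null), DataType::Utf8);
       println!("NULL literal planned as {:?}", literal_type(&ScalarValue::Null));
   }
   ```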




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
