tustvold commented on issue #5410:
URL: https://github.com/apache/arrow-rs/issues/5410#issuecomment-1962510993

   How about
   
    ```rust
    fn write_null<T>(typed: &mut ColumnWriterImpl<'_, T>) -> Result<(), Error>
    where
        T: DataType,
    {
        // Writing a null requires specifying definition levels: a definition
        // level of 0 with no corresponding value encodes a null for an
        // optional column
        typed.write_batch(&[], Some(&[0]), None).map_storage_err()
    }
    ```
   
   As an aside, writing values one at a time will be incredibly slow; you might 
want to consider collecting 1000 or more values for a column into a `Vec` or 
arrow array, and writing them in one go. Converting to an arrow representation 
would have the added advantage of eliminating the need to understand the Dremel 
record shredding that parquet uses, and would open the possibility of using 
arrow's vectorised kernels and IO abstractions elsewhere in GlueSQL.

