houqp commented on a change in pull request #55:
URL: https://github.com/apache/arrow-datafusion/pull/55#discussion_r622398455
##########
File path: datafusion/src/logical_plan/plan.rs
##########
@@ -141,7 +137,7 @@ pub enum LogicalPlan {
/// Produces rows from a table provider by reference or from the context
TableScan {
/// The name of the table
- table_name: String,
+ table_name: Option<String>,
Review comment:
In SQL this would never happen. It's more of a thing in Spark land, where for
simple queries people usually just load a CSV or Parquet partition into a
dataframe without any table registration. For example, dataframe users could
start with a `Context.read_csv` call and then reference columns only by
unqualified names in subsequent transformations.
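
A minimal sketch of that workflow, assuming the DataFusion DataFrame API of this era (`ExecutionContext::read_csv`, `col`, `lit`); exact signatures (e.g. sync vs. async `read_csv`) may differ between versions, and `example.csv` is just a placeholder path:

```rust
use datafusion::prelude::*;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
    // Load a CSV straight into a DataFrame -- no table registration,
    // so the underlying TableScan has no table name to qualify columns with.
    let mut ctx = ExecutionContext::new();
    let df = ctx.read_csv("example.csv", CsvReadOptions::new())?;

    // Subsequent transformations reference columns by unqualified name only.
    let df = df
        .filter(col("a").gt(lit(10)))?
        .select(vec![col("a"), col("b")])?;

    let results = df.collect().await?;
    println!("{:?}", results);
    Ok(())
}
```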
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]