Github user paul-rogers commented on a diff in the pull request:
https://github.com/apache/drill/pull/906#discussion_r134297430
--- Diff:
exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/ScanBatch.java
---
@@ -152,97 +157,75 @@ public void kill(boolean sendUpstream) {
    }
  }

-  private void releaseAssets() {
-    container.zeroVectors();
-  }
-
-  private void clearFieldVectorMap() {
-    for (final ValueVector v : mutator.fieldVectorMap().values()) {
-      v.clear();
-    }
-  }
-
  @Override
  public IterOutcome next() {
    if (done) {
      return IterOutcome.NONE;
    }
    oContext.getStats().startProcessing();
    try {
-      try {
-        injector.injectChecked(context.getExecutionControls(), "next-allocate", OutOfMemoryException.class);
-
-        currentReader.allocate(mutator.fieldVectorMap());
-      } catch (OutOfMemoryException e) {
-        clearFieldVectorMap();
-        throw UserException.memoryError(e).build(logger);
-      }
-      while ((recordCount = currentReader.next()) == 0) {
+      while (true) {
        try {
-          if (!readers.hasNext()) {
-            // We're on the last reader, and it has no (more) rows.
-            currentReader.close();
-            releaseAssets();
-            done = true; // have any future call to next() return NONE
-
-            if (mutator.isNewSchema()) {
-              // This last reader has a new schema (e.g., we have a zero-row
-              // file or other source). (Note that some sources have a non-
-              // null/non-trivial schema even when there are no rows.)
+          injector.injectChecked(context.getExecutionControls(), "next-allocate", OutOfMemoryException.class);
+          currentReader.allocate(mutator.fieldVectorMap());
+        } catch (OutOfMemoryException e) {
+          clearFieldVectorMap();
+          throw UserException.memoryError(e).build(logger);
+        }

-              container.buildSchema(SelectionVectorMode.NONE);
-              schema = container.getSchema();
+        recordCount = currentReader.next();
+        Preconditions.checkArgument(recordCount >= 0,
+            "recordCount from RecordReader.next() should not be negative");

-              return IterOutcome.OK_NEW_SCHEMA;
-            }
-            return IterOutcome.NONE;
-          }
-          // At this point, the reader that hit its end is not the last reader.
+        boolean isNewRegularSchema = mutator.isNewSchema();
+        // We should skip the reader, when recordCount = 0 && !isNewRegularSchema.
+        // Add/set implicit column vectors, only when reader gets > 0 row, or
+        // when reader gets 0 row but with a schema with new field added
+        if (recordCount > 0 || isNewRegularSchema) {
+          addImplicitVectors();
+          populateImplicitVectors();
+        }

-          // If all the files we have read so far are just empty, the schema is not useful
-          if (! hasReadNonEmptyFile) {
-            container.clear();
-            clearFieldVectorMap();
-            mutator.clear();
-          }
+        boolean isNewImplicitSchema = mutator.isNewSchema();
--- End diff --
The implicit schema will change only if the new file is in a different
directory than the previous file. The implicit columns themselves are fixed
(`filename`, etc.); only the `dir0`, `dir1` columns can change.
Does Drill allow combining files from different directory levels into a
single scan? If so, don't we have a trivial schema change problem? If the scan
decides to scan `a/b/c.csv` before, say, `a/b/d/e.csv`, then we get a trivial
schema change on the second file when we add the `dir2` column. Better analysis
of the collection of paths up front would avoid this problem.
If we avoid the `dirx` problem, then the implicit schema is constant for
all readers (the values of the columns, of course, differ), so the
`isNewImplicitSchema` logic can be dropped.
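The up-front path analysis suggested above could be sketched roughly as follows. This is a hypothetical illustration, not Drill code: the class and method names (`PartitionDepthAnalyzer`, `maxPartitionDepth`) are invented, and the assumption is that if every reader materializes `dir0` through `dir(maxDepth-1)` (null-padded for shallower files), the implicit schema stays constant across readers.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: compute the maximum partition-directory depth over
// the whole file selection before any reader starts, so that all readers
// emit the same set of dirN columns and no trivial schema change occurs.
public class PartitionDepthAnalyzer {

  // Number of directory components between the scan root and the file name.
  static int partitionDepth(String root, String path) {
    String relative = path.substring(root.length() + 1); // strip "root/"
    // Components minus one: the last component is the file itself.
    return relative.split("/").length - 1;
  }

  // Max depth over the whole selection, computed up front.
  static int maxPartitionDepth(String root, List<String> paths) {
    int max = 0;
    for (String p : paths) {
      max = Math.max(max, partitionDepth(root, p));
    }
    return max;
  }

  public static void main(String[] args) {
    // The example from the comment above: scanning a/b/c.csv (depth 1)
    // and a/b/d/e.csv (depth 2) under root "a".
    List<String> files = Arrays.asList("a/b/c.csv", "a/b/d/e.csv");
    int depth = maxPartitionDepth("a", files);
    System.out.println(depth); // both readers would then emit dir0 and dir1
  }
}
```

With this analysis, the first reader would already declare the deeper `dirN` column (filled with nulls), so encountering the deeper file later adds no new implicit column.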