Tim Armstrong has posted comments on this change.

Change subject: IMPALA-4674: Part 1: port BufferedTupleStream to BufferPool
......................................................................


Patch Set 4:

(37 comments)

http://gerrit.cloudera.org:8080/#/c/5811/4/be/src/runtime/buffered-tuple-stream-v2.cc
File be/src/runtime/buffered-tuple-stream-v2.cc:

Line 326:       LOG(INFO) << "Failed to get reservation for " << pages_.front().len();
> I need to remove this log message.
Done


http://gerrit.cloudera.org:8080/#/c/5811/4/be/src/runtime/buffered-tuple-stream-v2.h
File be/src/runtime/buffered-tuple-stream-v2.h:

PS4, Line 41:  or scratch disk storage
> what does this mean? isn't it always backed by BufferPool (which  may spill
Reworded to reflect that.


PS4, Line 45: the APIs are called from a single thread
> this is saying per BufferTupleStream object, right? maybe clarify that
Done


Line 46: ///
> it'd be nice to explain the point of this abstraction a bit more explicitly
Done


PS4, Line 49: pinned or unpinned
> I think these terms need to be defined relative to the stream as far as wha
Done


PS4, Line 59: page size
> wouldn't it also be a function of tuple size (i.e. tuples per page)?
I don't think this added much - the previous sentence better describes the 
number of null indicators.


PS4, Line 77: pages
> pages' buffers.
Done


PS4, Line 80: location in memory
> or you could say "buffer".
Done


PS4, Line 110: in pages_
> in the stream
Done


PS4, Line 111: in pages_
> same
Done


PS4, Line 114: (if read_write is
             : ///   true, then two pages need to be in memory).
> rather than saying this in parenthesis, i think we should pull it up to say
Reworded this part to say that we only need enough reservation for a single 
read/write buffer.
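
To make that concrete, here is a rough sketch of the reservation math implied above; the helper name is made up for illustration and assumes (per the quoted comment) that an unpinned stream pins one write page, plus one read page when read/write mode is enabled:

  #include <cstdint>

  // Hypothetical helper, not part of the patch: the minimum reservation an
  // unpinned stream needs is one page buffer for writing and, in read/write
  // mode, one more for reading.
  int64_t MinStreamReservation(int64_t page_len, bool read_write) {
    int num_pinned_pages = read_write ? 2 : 1;
    return page_len * num_pinned_pages;
  }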


PS4, Line 117: caller's reservation is insufficient
> does it first try to increase the reservation at this point?
Done


Line 124: /// returned so far from the stream may be freed on the next call to GetNext().
> do we have a jira we can reference as a TODO here to simplify this?
Done


Line 126: /// Manual construction of rows with AllocateRow():
> not for now, but is it possible to unify things so we only have one way to 
Updated to make it clearer that it's called instead of AddRow().


Line 138: ///     this array as new rows are inserted in the page. If we do so, then there will be
> is that still something we want to do?
No longer relevant with the changes I made to the null indicator layout.


PS4, Line 146: id
> what is 'id'? is this index?
Done


PS4, Line 147: With 8MB pages that is 512GB per stream.
> where does this limit come into play?  is this something that will cause tr
I believe we only use RowIdx currently if the right side of a join has multiple 
tuples per row.

I reworked this to drastically simplify it. Instead of RowIdx we just return a 
pointer to the start of the data. This avoids all the problems with limits here.
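
For illustration only (the names here are hypothetical, not the signatures in this patch), the difference is roughly:

  #include <cstdint>

  // Old idea: RowIdx packs (page index, offset) into fixed-width bit fields,
  // so the addressable stream size is capped; e.g. 16 bits of page index with
  // 8MB pages gives the 512GB per-stream limit quoted above.
  struct RowIdx {
    uint64_t data;  // e.g. page index in the high bits, byte offset below
  };

  // New idea: hand the caller a plain pointer to the start of the row's data
  // in the page's buffer, so no limit is baked into an encoding.
  uint8_t* row_data = nullptr;  // filled in when the row is appended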


PS4, Line 186: page_len: the size of pages to use in the stream
> what does that mean for large rows?
We need something like a default and max page length. Added a TODO.


PS4, Line 198: of
> off
Done


PS4, Line 203: AddRow
> is it also used before AllocateRow()?
Done


Line 207:   ///     if an error status is returned.
> broken line break
Done


Line 212:   /// sets 'status' to an error if an error occurred.
> hard to parse that sentence.  but, can we simplify this dance now that we h
Updated this comment to be clearer about the possible scenarios.

It's guaranteed to succeed for now. It's actually a bit more complicated for 
variable-length pages since that would rely on the client leaving enough slack 
in the reservation to read the largest-possible page.
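
A toy illustration of that slack issue (a hypothetical check, not code from the patch): with a fixed page size, unpinning the current read page frees exactly enough reservation to pin the next one, but with variable page sizes the client would have to keep slack for the largest possible page:

  #include <cstdint>

  // Advancing the read page succeeds if the reservation freed by unpinning
  // the current page, plus any unused reservation, covers the next page.
  // With fixed-size pages cur_page_len == next_page_len, so this always
  // holds; with variable sizes it only holds if the client left enough slack.
  bool CanAdvanceReadPage(int64_t unused_reservation, int64_t cur_page_len,
                          int64_t next_page_len) {
    return unused_reservation + cur_page_len >= next_page_len;
  }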


Line 223:   /// tuples.
> same questions. let's think of this interface can be simplified now.
Updated to refer back to the AddRow() comment.


Line 228:   /// been allocated with the stream's row desc.
> is the row valid only as long as the stream remains pinned?
Done


PS4, Line 232: begin reading
> before the first GetNext()?
Done


Line 236:   ///     false if the page could not be pinned and no error was encountered.
> can we explain that in terms of reservations? and like questions above, doe
Done


PS4, Line 240: enough memory
> what does that mean in terms of reservations?  and does it still make sense
Done


Line 253:   };
> this seems like a wrinkle to the class comment. can it be succinctly descri
I think the class comment now covers it (it mentions that it only pins the pages being read and written in unpinned mode).


PS4, Line 259: must be copied out by the caller
> this seems to contradict the class comment about "needs_deep_copy"
Updated the comment.


PS4, Line 267: *got_rows is false if the stream could not be pinne
> does that still make sense with reservations?  if so, it should be expresse
Reworded


PS4, Line 271: date
> data?
Done


PS4, Line 271: buffers containing page
> are there buffers that don't contain page data?
Reworded; that was confusing.


PS4, Line 286: Returns the byte size of the stream that is currently pinned in memory.
> Is this trying to say:
Done


http://gerrit.cloudera.org:8080/#/c/5811/4/be/src/runtime/query-state.cc
File be/src/runtime/query-state.cc:

Line 100:   if (ExecEnv::GetInstance()->buffer_pool() != nullptr) {
> Use exec-env directly.
Done


http://gerrit.cloudera.org:8080/#/c/5811/4/be/src/runtime/query-state.h
File be/src/runtime/query-state.h:

Line 29: #include "runtime/tmp-file-mgr.h" // TODO: can we avoid this?
> Remove TODO
Done


Line 121:   Status CheckSpillingEnabled();
> This is currently dead code - can remove it.
Done


http://gerrit.cloudera.org:8080/#/c/5811/4/be/src/runtime/row-batch.h
File be/src/runtime/row-batch.h:

Line 442:   /// TODO: ownership semantics
> Need to fix this TODO
Done


-- 
To view, visit http://gerrit.cloudera.org:8080/5811
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: I7bb47a818564ac19a6be4ca88db0578f4ea0b709
Gerrit-PatchSet: 4
Gerrit-Project: Impala-ASF
Gerrit-Branch: master
Gerrit-Owner: Tim Armstrong <tarmstr...@cloudera.com>
Gerrit-Reviewer: Dan Hecht <dhe...@cloudera.com>
Gerrit-Reviewer: Tim Armstrong <tarmstr...@cloudera.com>
Gerrit-HasComments: Yes
