reductionista commented on a change in pull request #440: DL: Update
training_preprocessor_dl to use bytea
URL: https://github.com/apache/madlib/pull/440#discussion_r322423035
##########
File path:
src/ports/postgres/modules/deep_learning/input_data_preprocessor.py_in
##########
@@ -176,15 +176,25 @@ class InputDataPreprocessorDL(object):
distributed_by_clause = ''
else:
distributed_by_clause= ' DISTRIBUTED BY (buffer_id) '
+ dep_shape_col = add_postfix(
+ MINIBATCH_OUTPUT_DEPENDENT_COLNAME_DL, "_shape")
+ ind_shape_col = add_postfix(
+ MINIBATCH_OUTPUT_INDEPENDENT_COLNAME_DL, "_shape")
sql = """
CREATE TABLE {self.output_table} AS
- SELECT * FROM
+ SELECT {self.schema_madlib}.convert_array_to_bytea({x}) AS {x},
+ {self.schema_madlib}.convert_array_to_bytea({y}) AS {y},
+ array_dims({x}) AS {ind_shape_col},
+ array_dims({y}) AS {dep_shape_col},
Review comment:
If you use something like `ARRAY[array_upper({x}, 1), array_upper({x}, 2),
array_upper({x}, 3)]` then `independent_var_shape` and `dependent_var_shape`
will come out as INTEGER[] instead of TEXT.
Not sure if this is worth changing, but it seems like a more natural way of
storing the shape, and it would allow Python to read it directly during
training instead of having to parse a string into an array.
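For illustration only (the sample array and aliases below are hypothetical, not
part of the patch), this is the difference being suggested: `array_dims()`
returns a TEXT description of the bounds, while an ARRAY constructor over
`array_upper()` calls yields an INTEGER[] containing just the dimension sizes.

```sql
-- Hypothetical example, not from the patch: two ways to capture the shape
-- of a sample 5x3 array.
SELECT array_dims(x) AS shape_as_text,            -- TEXT, e.g. '[1:5][1:3]'
       ARRAY[array_upper(x, 1),
             array_upper(x, 2)] AS shape_as_ints  -- INTEGER[], e.g. {5,3}
FROM (SELECT ARRAY[[1,2,3],[4,5,6],[7,8,9],
                   [10,11,12],[13,14,15]] AS x) t;
```

When read back through plpy or a Python driver, the INTEGER[] form arrives as a
list of ints, whereas the TEXT form from `array_dims()` would need string
parsing first.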