reductionista commented on a change in pull request #462: DL: Add asymmetric cluster support for fit and evaluate
URL: https://github.com/apache/madlib/pull/462#discussion_r352940338
 
 

 ##########
 File path: src/ports/postgres/modules/deep_learning/madlib_keras_fit_multiple_model.sql_in
 ##########
 @@ -63,11 +63,13 @@ CREATE OR REPLACE FUNCTION MADLIB_SCHEMA.fit_transition_multiple_model(
     model_architecture         TEXT,
     compile_params             TEXT,
     fit_params                 TEXT,
+    dist_key                   INTEGER,
+    dist_key_mapping           INTEGER[],
     current_seg_id             INTEGER,
-    seg_ids                    INTEGER[],
-    images_per_seg             INTEGER[],
-    gpus_per_host              INTEGER,
     segments_per_host          INTEGER,
+    images_per_seg             INTEGER[],
+    use_gpus                   BOOLEAN,
+    gpus_per_seg               INTEGER[],
 
 Review comment:
   I notice you changed `gpus_per_host` to `gpus_per_seg` everywhere. I get that the size of this array is the number of segments, but it bothers me that we're referring to the number of GPUs per host as `gpus_per_seg`.

   To me, `gpus_per_seg` should only refer to the number of GPUs per segment, i.e. `gpus_per_host / segments_per_host`. Each element of this array still represents the `gpus_per_host` seen from that segment, so I think `gpus_per_host` would be the better name. But maybe there is a third alternative that's better than either of these?
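
   To make the distinction concrete, here's a minimal sketch of the two quantities. The helper names and data layout below are hypothetical, purely for illustration, and not MADlib's actual code:

```python
# Hypothetical illustration only -- not MADlib's actual code.
# hosts: host name -> number of GPUs on that host (counts may differ per
#        host, which is the asymmetric-cluster case this PR addresses)
# seg_to_host: segment id -> host that segment runs on

def build_gpu_array(hosts, seg_to_host):
    """One entry per segment, each holding its *host's* GPU count.

    This is what the array currently named `gpus_per_seg` holds:
    it is indexed by segment, but the values are host-level counts.
    """
    return [hosts[seg_to_host[seg]] for seg in sorted(seg_to_host)]

def gpus_per_segment(gpus_on_host, segments_per_host):
    """What "gpus per seg" would more naturally mean: the GPUs available
    to a single segment, i.e. gpus_per_host / segments_per_host."""
    return gpus_on_host // segments_per_host

# Example: host0 has 4 GPUs, host1 has 2, with 2 segments on each host.
hosts = {"host0": 4, "host1": 2}
seg_to_host = {0: "host0", 1: "host0", 2: "host1", 3: "host1"}
print(build_gpu_array(hosts, seg_to_host))   # [4, 4, 2, 2]
print(gpus_per_segment(hosts["host0"], 2))   # 2
```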
   
   Personally, I'd be perfectly fine with just calling it `gpus_per_host`, but I admit that could also lead to some confusion.
   
   See further comments later on for more...
