amaliujia commented on code in PR #38460:
URL: https://github.com/apache/spark/pull/38460#discussion_r1010084257


##########
python/pyspark/sql/connect/client.py:
##########
@@ -145,6 +145,39 @@ def _build_metrics(self, metrics: "pb2.Response.Metrics") -> typing.List[PlanMet
     def sql(self, sql_string: str) -> "DataFrame":
         return DataFrame.withPlan(SQL(sql_string), self)
 
+    def range(
+        self,
+        start: int,
+        end: int,
+        step: Optional[int] = None,

Review Comment:
   Hmm, we are not marking `step` as required because the Scala-side implementation does not treat it as required and also gives it a default value.
   
   ```scala
     private def transformRange(rel: proto.Range): LogicalPlan = {
       val start = rel.getStart
       val end = rel.getEnd
       // `step` is optional in the proto; default to 1 when the client leaves it unset.
       val step = if (rel.hasStep) {
         rel.getStep.getStep
       } else {
         1
       }
       // `numPartitions` is optional as well; fall back to the session's default parallelism.
       val numPartitions = if (rel.hasNumPartitions) {
         rel.getNumPartitions.getNumPartitions
       } else {
         session.leafNodeDefaultParallelism
       }
       logical.Range(start, end, step, numPartitions)
     }
   ```
   
   Same for `numPartitions`.
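
   A minimal sketch of what the client-side signature could then look like, assuming a `Range` plan class that follows the same `DataFrame.withPlan(SQL(...), self)` pattern shown above (the `Range` class and its keyword names are assumptions for illustration, not the PR's actual code): both `step` and `num_partitions` default to `None`, the corresponding proto fields are left unset, and the server-side `transformRange` supplies the defaults (`1` and `session.leafNodeDefaultParallelism`).

   ```python
   from typing import Optional

   def range(
       self,
       start: int,
       end: int,
       step: Optional[int] = None,
       num_partitions: Optional[int] = None,
   ) -> "DataFrame":
       # Hypothetical Range plan node: only fields the caller actually provided
       # are serialized into the proto, so leaving `step` and `num_partitions`
       # as None lets transformRange apply its server-side defaults.
       return DataFrame.withPlan(
           Range(start, end, step, num_partitions),
           self,
       )
   ```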


