Github user vijaykiran commented on a diff in the pull request:

    https://github.com/apache/spark/pull/10602#discussion_r48839934
  
    --- Diff: python/pyspark/mllib/fpm.py ---
    @@ -130,15 +133,22 @@ def train(cls, data, minSupport=0.1, maxPatternLength=10, maxLocalProjDBSize=320
             """
             Finds the complete set of frequent sequential patterns in the input sequences of itemsets.
     
    -        :param data: The input data set, each element contains a sequnce of itemsets.
    -        :param minSupport: the minimal support level of the sequential pattern, any pattern appears
    -            more than  (minSupport * size-of-the-dataset) times will be output (default: `0.1`)
    -        :param maxPatternLength: the maximal length of the sequential pattern, any pattern appears
    -            less than maxPatternLength will be output. (default: `10`)
    -        :param maxLocalProjDBSize: The maximum number of items (including delimiters used in
    -            the internal storage format) allowed in a projected database before local
    -            processing. If a projected database exceeds this size, another
    -            iteration of distributed prefix growth is run. (default: `32000000`)
    +        :param data:
    +          The input data set, each element contains a sequnce of itemsets.
    +        :param minSupport:
    +          The minimal support level of the sequential pattern, any pattern appears
    +          more than  (minSupport * size-of-the-dataset) times will be output.
    +          default: `0.1`)
    --- End diff --
    
    I think the format should be (default: `0.1`).
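    
    For reference, a minimal usage sketch of the method whose docstring is being edited, assuming the standard `pyspark.mllib.fpm.PrefixSpan` API and an already initialized SparkContext named `sc` (the sample data below is illustrative only):
    
        from pyspark.mllib.fpm import PrefixSpan
    
        # Each element of the input RDD is a sequence of itemsets (lists of items).
        sequences = sc.parallelize([
            [["a", "b"], ["c"]],
            [["a"], ["c", "b"], ["a", "b"]],
            [["a", "b"], ["e"]],
            [["f"]],
        ], 2)
    
        # minSupport and maxPatternLength correspond to the parameters documented
        # in the diff above; their defaults are 0.1 and 10 respectively.
        model = PrefixSpan.train(sequences, minSupport=0.1, maxPatternLength=10)
    
        # Each result is a FreqSequence with the pattern and its frequency.
        for fs in model.freqSequences().collect():
            print(fs.sequence, fs.freq)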

