Hi all, I wanted to try Spark 1.0.0 because of the new SQL component. I have cloned and built the latest from Git, but the examples described here no longer work:
http://people.apache.org/~pwendell/catalyst-docs/mllib-classification-regression.html#binary-classification-2

I get the following error:

/home/ec2-user/spark/python/pyspark/mllib/_common.pyc in _get_initial_weights(initial_weights, data)
    313 def _get_initial_weights(initial_weights, data):
    314     if initial_weights is None:
--> 315         initial_weights = _convert_vector(data.first().features)
    316     if type(initial_weights) == ndarray:
    317         if initial_weights.ndim != 1:

AttributeError: 'numpy.ndarray' object has no attribute 'features'

I'm not sure what type is intended as the input; any help would be appreciated.

Thanks,

--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/pyspark-MLlib-examples-don-t-work-with-Spark-1-0-0-tp6546.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
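P.S. Reading the traceback again, `_get_initial_weights` calls `data.first().features`, so my guess is that each element of the training RDD needs a `.features` attribute (i.e. something LabeledPoint-like) rather than a bare NumPy array. A minimal sketch of what I mean, using a stand-in class since I'm not certain of the real pyspark.mllib API:

```python
# Sketch only: `LabeledPoint` here is a stand-in mirroring what I assume
# pyspark.mllib provides (a label plus a feature vector), not the real class.
from numpy import array


class LabeledPoint:
    def __init__(self, label, features):
        self.label = label
        self.features = features


# A row that is a bare ndarray has no `.features` attribute, which would
# reproduce the AttributeError from _get_initial_weights:
row = array([0.0, 1.0, 2.0])
assert not hasattr(row, "features")

# Wrapping each row as (label, features) exposes the attribute the
# traceback is looking for:
point = LabeledPoint(1.0, array([0.0, 1.0, 2.0]))
assert hasattr(point, "features")
```

If that guess is right, the fix would be to map each training row into such a labeled structure before calling the trainer, but confirmation from someone who knows the 1.0.0 API would be appreciated.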