You can make use of the probability vector from Spark classification. When
you run a Spark classification model for prediction, along with classifying
each record into its class, Spark also gives a probability vector (the
probability that the record belongs to each individual class). So just take
the probability vector from the prediction output.
> The GPU branch has instructions on how to set up such a compute
> environment:
>
> https://github.com/Azure/mmlspark/tree/gpu#gpu-vm-setup
>
> Cheers,
>
> Roope – Microsoft Cloud AI Team
>
> *From:* hosur narahari [mailto:hnr1...@gmail.com]
Hi Roope,
Does this mmlspark project use GPGPU for processing, or just CPU cores?
DL models are computationally very intensive.
Best Regards,
Hari
On 6 Jul 2017 9:33 a.m., "Gaurav1809" wrote:
> Thanks Roope for the inputs.
>
> On Wed, Jul 5, 2017 at 11:41 PM,
TensorFlow provides NLP implementations that use deep learning. But it's
not distributed out of the box, so you can try to integrate Spark with
TensorFlow.
Best Regards,
Hari
On 11 Apr 2017 11:44 p.m., "Gabriel James"
wrote:
> Me too. Experiences and recommendations
Use the flatMap function on JavaRDD.
On 5 Apr 2017 3:13 p.m., "Hamza HACHANI" wrote:
> I want to convert a JavaRDD<List<Object>> to JavaRDD<Object>. For example,
> if there are 3 elements in a List, 3 Objects would be created in my new
> JavaRDD.
>
> Does anyone have an idea?
>
Try lit(fromDate) and lit(toDate). You have to import
org.apache.spark.sql.functions.lit to use it.
On 31 Mar 2017 7:45 a.m., "shyla deshpande"
wrote:
The following works:
df.filter($"createdate".between("2017-03-20", "2017-03-22"))
I would like to pass variables