Hello,
I am writing one Spark application. It runs well but takes a long
execution time; can anyone help me optimize my query to increase the
processing speed?
I am writing an application in which I have to construct histograms and
compare the histograms in order to find the final
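The message is cut off in the archive, but the core steps it names are building a histogram and comparing two histograms. A minimal plain-Scala sketch of those two steps; the integer bucketing and the L1 distance metric are illustrative assumptions, not from the thread:

```scala
// Minimal sketch, assuming integer-bucketed histograms and L1 distance
// as the comparison metric (both choices are assumptions).
object Histograms {
  // Bucket values by truncating x / bucketWidth; count per bucket.
  def histogram(xs: Seq[Double], bucketWidth: Double): Map[Int, Int] =
    xs.groupBy(x => (x / bucketWidth).toInt).map { case (b, v) => b -> v.size }

  // L1 distance between two histograms over the union of their buckets.
  def l1(h1: Map[Int, Int], h2: Map[Int, Int]): Int =
    (h1.keySet ++ h2.keySet).toSeq
      .map(b => math.abs(h1.getOrElse(b, 0) - h2.getOrElse(b, 0))).sum

  def main(args: Array[String]): Unit = {
    val h1 = histogram(Seq(0.5, 1.5, 1.7), 1.0) // Map(0 -> 1, 1 -> 2)
    val h2 = histogram(Seq(0.2, 0.4), 1.0)      // Map(0 -> 2)
    println(l1(h1, h2))
  }
}
```

In a Spark job the `histogram` step would typically be distributed (e.g. a map to bucket ids followed by a per-key count) and only the small histogram maps collected for comparison.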
e to REPL expecting an integer (index to the Array)
> whereas "MAX(count)" was a String.
>
> What do you want to achieve?
>
> On Tue, Apr 5, 2016 at 4:17 AM, Angel Angel <areyouange...@gmail.com>
> wrote:
>
>> Hello,
>>
>> i am writing one spar
Hello,
I am writing one Spark application in which I need the index of the maximum
element.
My table has only one column, and I want the index of the maximum element.
MAX(count)
23
32
3
Here is my code; the data type of the array is
org.apache.spark.sql.DataFrame.
Thanks in advance.
Also please
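The core of the request, the index of the maximum element, sketched with plain Scala collections (the names here are illustrative; on a one-column DataFrame one could collect the column into a local array first, since the result is a single index):

```scala
// Sketch: index of the maximum element of a single column of counts.
object ArgMax {
  // Pair each value with its index, take the pair with the largest value,
  // and return that pair's index.
  def argMax(xs: Seq[Int]): Int =
    xs.zipWithIndex.maxBy(_._1)._2

  def main(args: Array[String]): Unit = {
    val counts = Seq(23, 32, 3) // the MAX(count) column from the question
    println(argMax(counts))     // prints 1, the index of 32
  }
}
```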
Hello Sir/Madam,
I am writing a Spark application in Spark 1.4.0.
I have one text file with a size of 8 GB.
I save that file in Parquet format.
val df2 =
sc.textFile("/root/Desktop/database_200/database_200.txt").map(_.split(",")).map(p
=> Table(p(0), p(1).trim.toInt, p(2).trim.toInt,
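The snippet above is cut off mid-expression; it splits each CSV line and builds a `Table` case class. A plain-Scala sketch of that parsing step (the `Table` fields here are assumed from the split pattern; the real class may have more columns, and in the full job the parsed RDD would be converted with `.toDF` and written with `.write.parquet(...)`):

```scala
// Assumed shape of the row class, inferred from the split pattern above.
case class Table(key: String, a: Int, b: Int)

object ParseRows {
  // Parse one CSV line into a Table row, trimming whitespace around ints.
  def parse(line: String): Table = {
    val p = line.split(",")
    Table(p(0), p(1).trim.toInt, p(2).trim.toInt)
  }

  def main(args: Array[String]): Unit = {
    println(parse("k1, 10, 20"))
  }
}
```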
Hello Sir/Madam,
I am running one Spark application with 3 slaves and one master.
I am writing my information using the Parquet format,
but when I try to read it I get an error.
Please help me resolve this problem.
Code:
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
Hello Sir/Madam,
I am running a Spark SQL application on the cluster.
In my cluster there are 3 slaves and one master.
When I checked the progress of my application in the web UI at hadoopm0:8080,
I found that one of my slave nodes is always in *LOADING* mode.
Can you tell me what that means?
Also
Hello,
I have one table with 2 fields in it:
1) item_id and
2) count
I want to add up the count field per item (i.e., group the item_ids).
Example:
Input
item_id  Count
500      2
200      6
500      4
100      3
200      6
Required Output
Result
item_id  Count
500      6
200      12
100      3
I used the command Result =
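The grouping asked for above is a GROUP BY item_id with SUM(count); in Spark this would typically be `df.groupBy("item_id").sum("count")` on a DataFrame or `rdd.reduceByKey(_ + _)` on a pair RDD. The core logic, sketched with plain Scala collections:

```scala
// Sketch: sum the count field per item_id, matching the example data.
object GroupSum {
  def sumByItem(rows: Seq[(Int, Int)]): Map[Int, Int] =
    rows.groupBy(_._1).map { case (item, grp) => item -> grp.map(_._2).sum }

  def main(args: Array[String]): Unit = {
    val input = Seq((500, 2), (200, 6), (500, 4), (100, 3), (200, 6))
    println(sumByItem(input)) // 500 -> 6, 200 -> 12, 100 -> 3
  }
}
```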
Hello Sir/Madam,
I am trying to sort an RDD using the *sortByKey* function, but I am getting
the following error.
My code:
1) convert the RDD array into key-value pairs;
2) after that, sort by key.
But I am getting the error *No implicit Ordering defined for Any*.
[image: Inline image 1]
Thanks
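The usual cause of "No implicit Ordering defined for ..." is a key whose static type has no implicit `Ordering` in scope (for example a key typed `Any`, or a custom class). The fix is to give the key a concrete ordered type, or to supply an `Ordering` explicitly; `sortByKey` on an RDD resolves the same implicit. A plain-Scala sketch:

```scala
// Sketch: sorting key-value pairs requires an Ordering for the key type.
object SortPairs {
  // With Int keys, Ordering[Int] is found implicitly and sortBy compiles.
  // If the keys were typed Any, this line would fail to compile, mirroring
  // the sortByKey error from the question.
  def sortByKey(pairs: Seq[(Int, String)]): Seq[(Int, String)] =
    pairs.sortBy(_._1)

  def main(args: Array[String]): Unit = {
    println(sortByKey(Seq((3, "c"), (1, "a"), (2, "b"))))
  }
}
```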
Hello Sir/Madam,
I am writing one application using Spark SQL.
I made a very big table using the following command:
*val dfCustomers1 =
sc.textFile("/root/Desktop/database.txt").map(_.split(",")).map(p =>
Customer1(p(0), p(1).trim.toInt, p(2).trim.toInt, p(3))).toDF*
Now I want to search the
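The question is cut off after "search the"; assuming a lookup over the parsed rows, a plain-Scala sketch (the `Customer1` field names are assumptions inferred from the split pattern; on the DataFrame itself this would be a `dfCustomers1.filter(...)`):

```scala
// Assumed shape of Customer1, inferred from the map above.
case class Customer1(name: String, a: Int, b: Int, city: String)

object SearchTable {
  // Return all rows whose name matches the query.
  def findByName(rows: Seq[Customer1], name: String): Seq[Customer1] =
    rows.filter(_.name == name)

  def main(args: Array[String]): Unit = {
    val rows = Seq(Customer1("sagar", 1, 2, "pune"),
                   Customer1("jaya", 3, 4, "delhi"))
    println(findByName(rows, "jaya"))
  }
}
```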
Hello Sir/Madam,
I am using Spark SQL for the data operations.
I have two tables with the same fields.
Table 1
name   address  phone Number
sagar  india
Table 2
name   address  phone Number
jaya   india    222
I want to join these tables in the following way:
Result Table
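The archived message is cut off before the Result Table. Since both tables have the same fields, the likely intent is to combine their rows (a UNION ALL in Spark SQL, or `df1.unionAll(df2)` on DataFrames in Spark 1.x). A plain-Scala sketch under that assumption; the `Person` fields mirror the columns shown, with the missing phone modeled as an `Option`:

```scala
// Assumed row shape mirroring the columns in the question.
case class Person(name: String, address: String, phone: Option[Int])

object CombineTables {
  // Append the rows of the second same-schema table to the first.
  def unionAll(t1: Seq[Person], t2: Seq[Person]): Seq[Person] = t1 ++ t2

  def main(args: Array[String]): Unit = {
    val table1 = Seq(Person("sagar", "india", None))
    val table2 = Seq(Person("jaya", "india", Some(222)))
    println(unionAll(table1, table2))
  }
}
```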
Hello Sir/Madam,
I am working on deep learning using Spark.
I have implemented some algorithms using Spark,
but now I want to use the ImageNet database in Spark 1.4.0.
Can you give me some guidelines or references so I can handle the ImageNet
database?
Thanking you,
Sagar Jadhav.
Hello,
I am running a Spark application.
I have installed Cloudera Manager;
it includes Spark version 1.2.0.
But now I want to use Spark version 1.4.0,
which is also working fine.
But when I try to access HDFS in Spark 1.4.0 from Eclipse, I am getting
the following error:
"Exception in
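The exception is cut off in the archive, but a frequent cause of HDFS access failures when a hand-installed Spark talks to a CDH cluster is a Hadoop client version mismatch between the IDE build and the cluster. A hypothetical sbt fragment pinning matching versions (the exact CDH artifact version shown is an assumption and must match the cluster):

```scala
// Hypothetical build.sbt fragment: build against Spark 1.4.0 while pinning
// the Hadoop client to the version the CDH cluster actually runs.
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.4.0" % "provided",
  "org.apache.hadoop" % "hadoop-client" % "2.5.0-cdh5.3.0" // assumed CDH 5.3 artifact
)
```

Checking the truncated "Exception in ..." stack trace for an IPC version or protocol mismatch message would confirm or rule this out.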
-- Forwarded message --
From: Angel Angel <areyouange...@gmail.com>
Date: Wed, Sep 23, 2015 at 12:24 PM
Subject: Executor lost
To: user@spark.apache.org
Hello Sir/Madam,
I am running the deep learning example on Spark.
I have the following configuration:
1 master and 3 slaves
Hello,
I am running some deep learning algorithms on Spark.
Example:
https://github.com/deeplearning4j/dl4j-spark-ml-examples
I am trying to run this example in local mode and it works fine,
but when I try to run it in cluster mode I get the following error:
Loaded Mnist dataframe:
Respected Sir,
I installed two versions of Spark: 1.2.0 (Cloudera 5.3) and 1.4.0.
I am running an application that needs Spark 1.4.0;
the application is related to deep learning.
*So how can I remove version 1.2.0*
*and run my application on version 1.4.0?*
When I run the command