Hello Sir/Madam,

I am writing an application using Spark SQL.

I created a very large table using the following command:

import sqlContext.implicits._  // needed for toDF (spark-shell imports this automatically)

val dfCustomers1 = sc.textFile("/root/Desktop/database.txt")
  .map(_.split(","))
  .map(p => Customer1(p(0), p(1).trim.toInt, p(2).trim.toInt, p(3)))
  .toDF()
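
Customer1 here is a simple case class, along these lines (a sketch: only the Address field name is certain, since the filter below uses it; the other field names are placeholders):

case class Customer1(name: String, field2: Int, field3: Int, Address: String)  // only "Address" is known for sure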


Now I want to search the Address field of the table for many addresses (about 1500) and build a new table from the matching rows:

var k = dfCustomers1.filter(dfCustomers1("Address").equalTo(lines(0)))



for (a <- 1 until 1500) {
  val temp = dfCustomers1.filter(dfCustomers1("Address").equalTo(lines(a)))
  k = temp.unionAll(k)
}

k.show




But this takes a very long time. Can you suggest an optimized approach so I can reduce the execution time?
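
For example, would a single filter over all the addresses be faster than the loop? A sketch of what I mean, assuming lines is an Array[String] holding the ~1500 addresses and a Spark version with Column.isin (1.5+):

// Match all ~1500 addresses in one pass instead of 1500 filter/unionAll jobs
val addresses = lines.take(1500)
val k2 = dfCustomers1.filter(dfCustomers1("Address").isin(addresses: _*))
k2.show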


My cluster has 3 slaves and 1 master.


Thanks.
