Dears,

I need to commit the DB transaction once per partition, not once per row.
The code below didn't work for me:

rdd.mapPartitions(partitionOfRecords => {
  DBConnectionInit()
  val results = partitionOfRecords.map(..)
  DBConnection.commit()
})

Best regards,
Ahmed Atef Nawwar
Data Management
Cody Koeninger c...@koeninger.org wrote:

Map is lazy. You need an actual action, or nothing will happen. Use
foreachPartition, or do an empty foreach after the map.
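Cody's point can be seen without Spark at all. Below is a minimal plain-Scala sketch (not the poster's code) showing that Iterator.map defers all work until something actually consumes the iterator, which is why the commit fired before any row was processed:

```scala
object LazyMapDemo {
  var processed = 0 // counts how many rows the map body has actually touched

  def run(): (Int, Int) = {
    val partitionOfRecords = Iterator(1, 2, 3)
    // map only wraps the iterator; the body has not run for any row yet
    val results = partitionOfRecords.map { r => processed += 1; r * 2 }
    val before = processed // still 0: map is lazy
    results.toList         // forcing the iterator is what triggers the work
    val after = processed  // now 3
    (before, after)
  }
}
```

In the poster's snippet, DBConnection.commit() runs at the "before" point, while the rows are only mapped when Spark later consumes the returned iterator.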
On Thu, Aug 27, 2015 at 8:53 AM, Ahmed Nawar ahmed.na...@gmail.com wrote:

Dears,
I needs to commit DB Transaction for each ...
Cody Koeninger c...@koeninger.org wrote:

You need to return an iterator from the closure you provide to
mapPartitions.
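A hedged sketch of the shape Cody describes, with the DB calls stubbed out as comments since DBConnectionInit and DBConnection are the poster's own helpers: materialize the partition first so the per-row work runs before the commit, then hand a fresh iterator back to mapPartitions.

```scala
// Sketch of the mapPartitions closure; `process` stands in for the
// per-row work inside partitionOfRecords.map(..).
def commitPerPartition[A, B](partitionOfRecords: Iterator[A])(process: A => B): Iterator[B] = {
  // DBConnectionInit()                                 // open one connection per partition
  val results = partitionOfRecords.map(process).toList  // force the rows NOW, before the commit
  // DBConnection.commit()                              // all rows done: commit once
  results.iterator                                      // mapPartitions needs an Iterator back
}
```

Used as rdd.mapPartitions(commitPerPartition(_)(row => ...)), the commit then fires once per partition, after every row in that partition has been processed.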
On Thu, Aug 27, 2015 at 1:42 PM, Ahmed Nawar ahmed.na...@gmail.com wrote:

Thanks for the foreach idea. But once I used it I got an empty RDD. I think
it is because of results ...
On Thu, Aug 27, 2015 at 2:22 PM, Ahmed Nawar ahmed.na...@gmail.com wrote:

Yes, of course, I am doing that. But once I added results.foreach(row =>
{}) I got an empty RDD.

rdd.mapPartitions(partitionOfRecords => {
  DBConnectionInit()
  val results = partitionOfRecords.map ...
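The empty RDD is consistent with results being a one-shot Iterator: the added foreach drains it, so mapPartitions then returns an already-exhausted iterator. A small plain-Scala sketch of both the failure and the usual fix (materialize once, reuse the collection):

```scala
// Failure mode: foreach consumes the iterator, leaving nothing to return.
def drained(): List[Int] = {
  val results = Iterator(1, 2, 3).map(_ * 2)
  results.foreach(_ => ()) // drains the iterator (this is what forced the work)
  results.toList           // empty: the iterator was already consumed
}

// Fix: force the iterator once into a List, which can be traversed repeatedly.
def materialized(): List[Int] = {
  val results = Iterator(1, 2, 3).map(_ * 2).toList
  results.foreach(_ => ()) // safe: a List is not consumed by traversal
  results                  // still intact
}
```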
Dear Taotao,
Yes, I tried sparkCSV.
Thanks,
Nawwar
On Mon, Mar 23, 2015 at 12:20 PM, Taotao.Li taotao...@datayes.com wrote:
can it load successfully if the format is invalid?
--
*From: *Ahmed Nawar ahmed.na...@gmail.com
*To: *user@spark.apache.org
*Sent: *...
On Mon, Mar 23, 2015 at 2:18 PM, Ahmed Nawar ahmed.na...@gmail.com wrote:

Dears,

Is there any way to validate CSV, JSON ... files while loading them into a
DataFrame? I need to ignore corrupted rows (rows that do not match the
schema).

Thanks,
Ahmed Nawwar
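One Spark-agnostic approach is to pre-filter the raw lines against the schema before loading them. The sketch below is plain Scala under an assumed, purely illustrative two-column schema (name: String, age: Int); it is not the poster's code. (The spark-csv package also exposes a parser mode option for dropping malformed lines, which may be worth checking.)

```scala
import scala.util.Try

// Hypothetical schema for illustration: name: String, age: Int.
def matchesSchema(line: String): Boolean = {
  val cols = line.split(",", -1)
  cols.length == 2 && Try(cols(1).trim.toInt).isSuccess
}

val raw   = Seq("alice,30", "bob,notAnInt", "carol")
val clean = raw.filter(matchesSchema) // corrupted rows dropped before the DataFrame load
```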
Dears,

Are there any instructions to build Spark 1.3.0 on Windows 7?
I tried mvn -Phive -Phive-thriftserver -DskipTests clean package but
I got the errors below:

[INFO] Spark Project Parent POM ........................ SUCCESS [  7.845 s]
[INFO] Spark Project Networking ...
Sorry for the old subject, I am correcting it.

On Tue, Mar 17, 2015 at 11:47 AM, Ahmed Nawar ahmed.na...@gmail.com wrote:

Dears,
Is there any instructions to build spark 1.3.0 on windows 7. ...
On Mar 17, 2015, at 1:47 AM, Ahmed Nawar ahmed.na...@gmail.com wrote:

Dears,
Is there any instructions to build spark 1.3.0 on windows 7. ...
... </error>
</file>
</checkstyle>
On Tue, Mar 17, 2015 at 4:11 PM, Ted Yu yuzhih...@gmail.com wrote:

Can you look in the build output for the scalastyle warning in the mllib module?

Cheers

On Mar 17, 2015, at 3:00 AM, Ahmed Nawar ahmed.na...@gmail.com wrote:

Dear Yu,
With -X I got the error below:

[INFO ...