Hi Johnu,

Sorry for the lack of documentation. A micro-benchmark has been included in the latest release; you can find it in the module /mnemonic-benches/mnemonic-sort-bench/. This module is meant to demonstrate the performance improvement of durable native computing that comes from avoiding (un)marshaling alone: it runs a CPU-bound bubble sort, rather than an IO- or memory-bound workload, so the gain can be observed in isolation. The executable Python scripts are under /incubator-mnemonic/mnemonic-benches/mnemonic-sort-bench/bin/.
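For context on why a CPU-bound workload was chosen: bubble sort spends essentially all of its time in comparisons and swaps, with no IO, so any (un)marshaling overhead shows up directly in the timing. A minimal Python sketch of that kind of workload (illustrative only, not the actual benchmark code shipped in mnemonic-sort-bench) could look like:

```python
import random
import time

def bubble_sort(items):
    """Plain O(n^2) bubble sort; deliberately CPU-bound, no IO."""
    n = len(items)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # already sorted; stop early
            break
    return items

# Time the sort on random data, mimicking what a micro-bench driver measures.
data = [random.randint(0, 10**6) for _ in range(2000)]
start = time.perf_counter()
result = bubble_sort(list(data))
elapsed = time.perf_counter() - start
print(f"sorted {len(result)} items in {elapsed:.3f}s")
assert result == sorted(data)
```

Because the work is pure computation, two runs that differ only in how the data is stored (serialized objects vs. durable native objects) isolate the cost of serialization itself.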
Regarding the Hadoop integration workloads: the module /mnemonic-hadoop/mnemonic-hadoop-mapreduce/ has been implemented to support Hadoop MR usage scenarios, but it is still being refactored for code improvement. If you are interested, please join the development of that module and of the Mnemonic-based Hadoop MR benchmark workloads.

The linked integrated Spark benchmark workload is based on the old Mnemonic APIs; we are planning to migrate it to the new APIs on top of the upcoming module /mnemonic-spark/. If you are interested, please join us in developing that module and its integrated benchmark workloads. You can inspect the code changes of the linked integrated Spark branch as follows:

    git clone -b NonVolatileRDD --single-branch https://github.com/NonVolatileComputing/spark
    git diff HEAD^..

I think that code needs to be migrated to the new APIs; sorry again for the confusion. We are going to implement new benchmarks for Spark in an unintrusive way, based on the latest versions of Spark and Mnemonic. That should happen in the next few months, and I hope we can also establish a dedicated joint work group for industrial benchmarks of Spark-Mnemonic, Hadoop-Mnemonic, Flink-Mnemonic, etc., if desired.

Very truly yours,
+Gary

On 2/15/2017 11:47 AM, Johnu George (johnugeo) wrote:
> Hi,
> I would like to do some performance tests to start with mnemonic. I
> couldn’t find any documentation under
> https://mnemonic.incubator.apache.org/docs/ to start with. In github, I saw
> a link to https://github.com/NonVolatileComputing/spark which is said to have
> better performance than the default one. I would like to know the best way to
> test the performance of any application (eg. Spark provided in the github)
> with/without mnemonic? What was the test setup? How can I emulate the
> hardware? It would be helpful if anyone can point out to me some
> documentation to help me setup a test instance and see the performance gap?
>
>
> Thanks,
> Johnu
