---------- Forwarded message ----------
From: Raveendra Bhat <[email protected]>
Date: Sun, Apr 8, 2012 at 12:57 PM
Subject: Re: [Nepomuk] GSoC 2012 - Testing and Benchmarking in Nepomuk
To: Vishesh Handa <[email protected]>
Hi,

The coding period of GSoC starts on 21st May, but I have semester-end exams starting on 14th May and finishing on 26th May. I have mentioned this in my proposal for Testing and Benchmarking in Nepomuk, so I won't be available for the first week of the GSoC coding period.

On Wed, Apr 4, 2012 at 2:23 PM, Raveendra Bhat <[email protected]> wrote:

> Hi Vishesh,
>
> About my Qt and KDE programming experience:
>
> I am familiar with Qt desktop and mobile application development.
>
> I was working on developing a desktop application aimed at network traffic monitoring and management, but the project was not feasible and it already exists.
>
> I have basic knowledge of using Nepomuk in desktop applications, and I am good with its metadata management and how it allows interconnection of data between different desktop applications, etc.
>
> Now I am trying to understand the DMS APIs and the DBus architecture, and how the DBus interface is used in the Data Management Services.
>
> On Wed, Apr 4, 2012 at 1:13 AM, Vishesh Handa <[email protected]> wrote:
>
>> On Sun, Apr 1, 2012 at 12:30 PM, Raveendra Bhat <[email protected]> wrote:
>>
>>> Hi,
>>>
>>> Sorry for the late reply. I had my academic tests, and I was waiting for your reply to my last mail. In the meanwhile I have done the following work:
>>>
>>> 1. First of all, I cleared up my misconception that benchmarking is done only on two separate machines, and learnt that benchmarks are also used to measure the performance of a function, an operation, etc.
>>>
>>> 2. I also came across Sebastian Trueg's blog post about why a central DBus architecture is required for Nepomuk data management.
>>>
>>> 3. Studied the QTestLib framework and API: how a basic test is created and how the QBENCHMARK macro is added to a test function that we want to benchmark.
>>>
>>> 4. Also tried a few of the examples given in the QTestLib tutorial and succeeded in writing test functions for my own small sample class.
>>>
>>> Sorry, I couldn't do more work because of my college tests. I hope you will give me feedback on the work I have done and on what more needs to be done. I believe it will help me come up with a proof of concept and write a proposal.
>>>
>>> On Fri, Mar 30, 2012 at 12:50 AM, Raveendra Bhat <[email protected]> wrote:
>>>
>>>> Hi Vishesh,
>>>>
>>>> Firstly, thanks for your reply. I went through your blog post and had a look at your code. I was able to understand your filewatcher test and identificationtest. I have some doubts with respect to what needs to be done in the GSoC period.
>>>>
>>>> 1. The project statement clearly says I need to write test cases for Nepomuk services. Does it involve the ontology, storage, query and strigi services? What exactly do you mean by porting Nepomuk::Resource to the testing framework? Writing test cases for all Nepomuk::Resource properties/methods?
>>
>> Yup. In fact some of the tests already exist, but since Nepomuk::Resource has been ported to the new architecture, it now relies on DBus and requires a running Nepomuk storage.
>>
>> Nepomuk::Storage does a lot of caching, and tries to keep the cache up to date. That all needs tests.
>>
>>>> 2. According to my knowledge of benchmarks, I believed they should be done on two separate systems under test. Can you please give me a clear picture of *benchmarking* the caching time and property fetch time?
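As a rough illustration of the QTestLib/QBENCHMARK usage mentioned in point 3 above, and of the kind of property-fetch benchmark being asked about here, a minimal self-contained sketch follows. The class name, the file name in the .moc include, and the fetchLabel() stand-in are placeholders, not actual Nepomuk API; a real benchmark would fetch a property from a running Nepomuk storage inside the QBENCHMARK block instead.

#include <QtTest/QtTest>

// Stand-in for the operation being measured; a real benchmark would call
// into Nepomuk here (e.g. fetch a property of a resource) instead of
// building a string.
static QString fetchLabel()
{
    QString label;
    for (int i = 0; i < 1000; ++i)
        label += QString::number(i);
    return label;
}

class PropertyFetchBenchmark : public QObject
{
    Q_OBJECT
private slots:
    void benchmarkFetch()
    {
        QString result;
        // QTestLib executes this block repeatedly and reports the time
        // per iteration.
        QBENCHMARK {
            result = fetchLabel();
        }
        QVERIFY(!result.isEmpty());
    }
};

QTEST_MAIN(PropertyFetchBenchmark)
#include "propertyfetchbenchmark.moc"   // assumes the file is propertyfetchbenchmark.cpp

By default QTestLib reports wall-clock time per iteration; passing -tickcounter or -callgrind on the test's command line switches to a different measurement backend.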
>>>> 3. Benchmarking the file indexer means some tool like a system monitor which displays the memory and CPU usage of the indexer?
>>
>> Yes. In fact I'd like something to monitor the memory and CPU usage of virtuoso as well.
>>
>> Benchmarking strigi - needs to be benchmarked (with memory and CPU usage) for different file types.
>> Benchmarking virtuoso - we need to be able to see how fast certain queries run and how much memory virtuoso consumes. This will help a lot in the process of optimizing the SPARQL query libraries (kdelibs/nepomuk/query).
>>
>> So, overall here is what I'm looking for -
>>
>> * Benchmarks for strigi and virtuoso
>> * Unit tests for Nepomuk::Resource
>> * Integrated testing for the file-watcher and indexer (this will involve creating a mock KDE session and touching certain files to see if they are re-indexed)
>>
>> Eventually I should be able to write a test for, say, 'storeResources' and see how much memory and CPU virtuoso is consuming. Or compare different Nepomuk versions to see how fast it is to push in large blobs of data.
>>
>> Plus, we're going to be completely restructuring the Nepomuk::Resource internals, so your tests will go a long way in making sure that we do not break anything.
>>
>> Do you have any prior experience with Qt or KDE?
>>
>>>> Waiting for your reply. Meanwhile I'll be looking at your code and will come up with a proof of concept.
>>>>
>>>> On Wed, Mar 28, 2012 at 8:26 PM, Vishesh Handa <[email protected]> wrote:
>>>>
>>>>> On Wed, Mar 28, 2012 at 6:10 PM, Raveendra Bhat <[email protected]> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I am Raveendra from India. I am interested in writing a test framework for Nepomuk. I have basic knowledge of how Nepomuk works. I am familiar with Qt C++ development, but I am not familiar with the testing libraries in Qt/C++.
>>>>>>
>>>>>> Can you please tell me more details about this project? I want to be a Kontributor.
>>>>>
>>>>> Hey Raveendra
>>>>>
>>>>> I'm basically expecting someone to continue with my test framework [1]. That would involve porting the Nepomuk::Resource tests to the test framework, because they now require a DBus session.
>>>>>
>>>>> Additionally, I would want benchmarks on Nepomuk::Resource. How long does it take to fill up the cache? Fetching properties, and so on. You'll even need to write more tests for it.
>>>>>
>>>>> Now, with the introduction of Nepomuk 2.0 and the data management API, I would want benchmarks on the new functions as well. (They already have a lot of unit tests, so you do not need to write those.)
>>>>>
>>>>> I guess I'd also want some kind of benchmarks for the file indexer.
>>>>>
>>>>> That's just the start. Look at every existing Nepomuk service. If they do not have tests, they need them.
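To make the "monitor the memory and CPU usage of virtuoso (and the indexer)" idea above concrete, here is a minimal sketch of a standalone sampler that reads VmRSS for a given pid from /proc. The tool name, command-line interface and one-second interval are only assumptions for illustration; CPU time could be sampled the same way from /proc/<pid>/stat, and a benchmark harness would log these values alongside the query or indexing run being measured.

#include <QFile>
#include <QString>
#include <QTextStream>
#include <cstdio>
#include <unistd.h>

// Read the resident memory (VmRSS, in kB) of a process from /proc/<pid>/status.
// Returns -1 if the file cannot be read or the field is missing.
static long residentKiB(long pid)
{
    QFile status(QString("/proc/%1/status").arg(pid));
    if (!status.open(QIODevice::ReadOnly | QIODevice::Text))
        return -1;
    QTextStream in(&status);
    while (true) {
        const QString line = in.readLine();
        if (line.isNull())
            break;
        // The line looks like "VmRSS:    123456 kB"
        if (line.startsWith(QLatin1String("VmRSS:")))
            return line.section(QLatin1Char(':'), 1).trimmed()
                       .section(QLatin1Char(' '), 0, 0).toLong();
    }
    return -1;
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        std::fprintf(stderr, "usage: memsample <pid>\n");
        return 1;
    }
    const long pid = QString::fromLocal8Bit(argv[1]).toLong();

    // Print one sample per second until interrupted; a benchmark run would
    // record these values next to the timings of the operation under test.
    while (true) {
        std::printf("VmRSS: %ld kB\n", residentKiB(pid));
        std::fflush(stdout);
        sleep(1);
    }
}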
>>>>>
>>>>> [1] http://vhanda.in/blog/2012/03/nepomuk-test-framework/
>>>>>
>>>>>> --
>>>>>> regards,
>>>>>>
>>>>>> B R Raveendra
>>>>>>
>>>>>> _______________________________________________
>>>>>> Nepomuk mailing list
>>>>>> [email protected]
>>>>>> https://mail.kde.org/mailman/listinfo/nepomuk
>>>>>
>>>>> --
>>>>> Vishesh Handa
>>>>
>>>> --
>>>> regards,
>>>>
>>>> B R Raveendra
>>>
>>> --
>>> regards,
>>>
>>> B R Raveendra
>>
>> --
>> Vishesh Handa
>
> --
> regards,
>
> B R Raveendra

--
regards,

B R Raveendra
_______________________________________________
Nepomuk mailing list
[email protected]
https://mail.kde.org/mailman/listinfo/nepomuk
