Hi, I want to make sure I'm commenting on items that actually mean something to you guys, so I'd love to have someone tell me what the requirements are for the performance framework and what selection criteria are in place. I only know of one: that it be open source.
I'll respond inline below to some of your points, but the biggest point for me is the one I made about what gets delivered to implementers. If it's a requirement that the community deliver a framework that implementers can use to performance test their implementations, I'd love to hear the argument for jmeter in that case. As I mentioned, I don't see jmeter as viable in that scenario. If this isn't a requirement and implementers need to roll their own solution or test an OOTB community deployment, then jmeter would be fine. If it is a requirement, the only way I see you being able to meet it is to support an API that implementers can use to develop their use cases. The community won't know what features implementers have enabled/disabled/customized, and they won't know the deployment architecture. It's nearly impossible to deliver a static set of tests from the community to all implementers and say, "go click run and all should work".

Rest of comments inline below...

On Tue, Jul 17, 2012 at 1:27 AM, Berg, Alan <a.m.b...@uva.nl> wrote:

> Hi fellow hard workers,
>
> Good to have this discussion, I can learn from my peers. I will argue for Jmeter below. I am certain it is a viable tool; however, I am not saying that tsung is not also a viable tool. I want to make a fair comparison, so I need to note some differences in emphasis.
>
> In terms of maintainability, it is important for stress tests to be as simple as possible and data driven. It does not matter which technology you use if you don't follow conventions and basic design patterns.
>
> > I much prefer handling this programmatically,
>
> I found it straightforward to mentor a functional administrator to stress test using Jmeter. The GUI to create tests with its reverse proxy is not difficult to explain. True: the plans are saved in a more complex XML format than Tsung's. In terms of recording tests I tend to use Badboy (http://www.badboy.com.au/), save in Jmeter format and tweak. Give it a go.

A couple of points here:

1. I understand that showing someone the GUI seems to make things easier, and it may for some, especially non-technical people (BA, SME types). But I find the GUI gets in the way of rapid test development.

2. I never understood the value of getting non-technical people running performance tests. Even if the GUI makes generating a performance test easy, there's the setup of the system and resource monitoring, then collecting all the data and making sense of it. Performance tests aren't functional tests; judging whether a performance test passes or fails, is good or bad, requires a good amount of technical expertise.

So the argument here really boils down to GUI vs. API. I'd just encourage you to evaluate all the implications of the GUI workflow.

> Jmeter has lots of assertions including regex. You simply add the assertions as children under the http samplers.
>
> > Dynamic Variables - again it's jmeter's Bean Shell PreProcessor workflow vs tsung's regexp param in the http request XML
>
> You can use dynamic variables in Jmeter as well. Here is the list of functions to use in any sampler and the definition of variables: http://jmeter.apache.org/usermanual/functions.html
>
> If all this wiring is not enough, then you can fall back to the beanshell. At this point you are moving away from KISS and that should be a warning about maintainability.

When I use the term "dynamic variables" I'm referring to variables that are only set at runtime, are thread specific and are set by parsing some previous request's HTTP response. AFAIK that type of variable must be set via jmeter's beanshell preprocessor. In any case the point is, it's not as straightforward as in tsung.
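[For concreteness, a minimal sketch of what that looks like on the Tsung side -- the URLs, regexp and variable name below are purely illustrative, not taken from the rSmart framework or the OAE API. A dyn_variable captures a value from the response of the request it sits in, and subst="true" re-injects it into a later request:]

    <session name="signup_and_fetch" probability="100" type="ts_http">
      <request>
        <!-- parse the response of this request; the regexp needs one capture group -->
        <dyn_variable name="userid" re="userid=(\w+)"/>
        <http url="/api/user/create" method="POST" contents="name=perfuser"/>
      </request>
      <request subst="true">
        <!-- subst="true" enables %%_userid%% substitution in this request -->
        <http url="/api/user/%%_userid%%/profile" method="GET"/>
      </request>
    </session>

[Each simulated user keeps its own copy of the variable, which is the thread-specific behaviour described above.]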
> There is a working example of a framework of Jmeter in CLE land which can work with CLE, Hybrid and extended to OAE. This will allow us to share data models across communities. If later someone wishes to move to a hybrid instance then they can leverage their knowledge from CLE land. Now, it is true that this is currently not a reality (as Lance fairly pointed out); however, if we plough the land then seeds can grow.
>
> There are plenty of examples of Jmeter used at large scale with a large number of developers. It has a well established community.
>
> Here is a book on the subject: http://www.packtpub.com/beginning-apache-jmeter/book
>
> Here are some links: http://wiki.apache.org/jmeter/JMeterLinks/
>
> Here is a cloud service: http://blazemeter.com/

I understand the community is large, really large actually. But the way you are forced to collaborate, or the limitations of collaboration on the actual framework that's developed, is not optimal, mostly due to the GUI interface. We'll all be forced to re-record use cases as implementers, so I'm not sure what value implementers would be leveraging from any community work. Were the community to release a tagged API, implementers would just write very lightweight test cases that exercise the API. But again, that goes back to requirements and selection criteria.

At their core, both jmeter and tsung are great performance tools; the question is how you want to work with the performance framework inside the OAE dev community and with implementers at large. That should guide the selection criteria, IMO.

Thanks Alan,

-Kyle

> Jmeter's main weakness is that it does not understand JavaScript easily. Selenium WebDriver with QUnit is the way forward for that.
>
> Looking forward to a detailed response.
>
> Alan
>
> Alan Berg
> Group Education and Research Services
> Central Computer Services
> University of Amsterdam
> ------------------------------
> *From:* oae-dev-boun...@collab.sakaiproject.org [oae-dev-boun...@collab.sakaiproject.org] on behalf of Kyle Campos [kcam...@rsmart.com]
> *Sent:* 17 July 2012 03:25
> *To:* Branden Visser
> *Cc:* oae-dev@collab.sakaiproject.org
> *Subject:* Re: [oae-dev] Load testing tool
>
> Branden,
>
> I'll just jump into the technical reasons I see jmeter being more difficult to work with from a community perspective. Really you touched on it in your positive point #2 for Tsung, and that's Tsung's XML structure. But the implications of this deserve more highlight, especially in the context of the community requirement for automated/nightly performance tests.
>
> What I don't like about jmeter is primarily its GUI dependency and the implications of it on workflow, extensibility, collaboration etc... It makes for really slow test development with large use cases and more difficult maintenance/collaboration between teams. Concrete examples below...
>
> 1. AJAX request handling - I don't think they could have made it any more convoluted with their "logic controller". Using logic "wizards" in a GUI is just ugly and makes me cringe. I much prefer handling this programmatically, which is what I did in tsung and what our abstraction layer makes really easy and transparent to test writers. If you want the pain, then go through jmeter's GUI workflow for developing the logic around username lookup on signup, then factor in dynamic substitution with reading in usernames from an external file, and now think about how easy it would be for someone else to change this logic at runtime. Ick.
> There's one place that controls this in my tsung framework. You don't need to go through this pain at all writing test cases, and even if you built it from scratch it's very simple.
>
> 2. Dynamic Variables - again it's jmeter's Bean Shell PreProcessor workflow vs tsung's regexp param in the http request XML. And again this is also wrapped in our framework.
>
> Both of the above examples will be in heavy use in any good OAE performance test.
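[Again purely for illustration of the data-driven substitution mentioned in point 1 above -- the file path, field names and URL are made up -- feeding usernames into a Tsung session from an external CSV is a couple of lines of configuration rather than a GUI workflow:]

    <!-- in the <options> section: point Tsung at a CSV of test accounts -->
    <option name="file_server" id="users" value="/tmp/oae-users.csv"/>

    <!-- in a session: read the next line each iteration, then substitute the fields -->
    <setdynvars sourcetype="file" fileid="users" delimiter=";" order="iter">
      <var name="username"/>
      <var name="password"/>
    </setdynvars>
    <request subst="true">
      <http url="/api/login" method="POST" contents="username=%%_username%%&amp;password=%%_password%%"/>
    </request>

[Swapping in a different user list is then just a matter of pointing file_server at another file.]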
> I think your point about releasing performance tests as an artifact with the release is good in principle, but I don't see jmeter being the best vehicle to deliver those. As a deployer myself I'd much rather the community provide a tagged API set that I can leverage to build tests that profile MY use cases (our tsung framework is built with that in mind). I don't want a set of static scripts that may or may not execute in my environment and that may or may not profile anything of use for my implementation. There's no way the community can know those things or build performance tests that address all those use cases.
>
> I've gone through this technology selection with a broader set of requirements than most folks who use jmeter need. jmeter is a very common developer tool to quickly script up a simple performance test. I've never seen it used very successfully in a broad context with many devs contributing, with it running complex use cases, against a young code base, and it being maintainable over time. That being said, you are all very talented and I'm sure you could get it to work for you, but I'd be very careful that you don't paint yourselves into a corner with burdensome maintenance and a test development workflow that limits contribution.
>
> My $0.02
>
> -Kyle
>
> On Mon, Jul 16, 2012 at 4:42 AM, Branden Visser <mrvis...@gmail.com> wrote:
>
>> Hi everyone,
>>
>> I've been putting some research into an appropriate load testing tool for us to use; I have been focusing mainly on JMeter and Tsung.
>>
>> Tsung, as far as I can tell, has the following selling points:
>>
>> 1. Its erlang guts let it drive many concurrent users more efficiently than its competitors
>> 2. Its XML structure seems quite a bit leaner, making it easier to build tests from source
>> 3. There is existing work from rSmart that can be leveraged to drive our performance tests [1]
>>
>> With JMeter, I see the following benefits:
>>
>> 1. Lower overhead in spinning up a test, once the tests are already built (via a Maven plugin)
>> 2. I think the increase in complexity of JMeter comes with the benefit of extensibility (unless you know erlang, I guess..)
>> 3. There is an existing community effort that can be leveraged to drive our performance tests [2]
>>
>> They both have exactly 3 advantages, I don't know what to do!?
>>
>> Just kidding. But, unless there is quantifiable evidence that suggests JMeter's performance will not suffice to properly test OAE (I have not been able to find such evidence yet, but others may have more data), I propose that we move forward with JMeter. I see value in making the JMeter tests executable from the same command line on which OAE is built. I think this moves towards making the JMeter tests an artifact of the release and not some orthogonal set of scripts uploaded elsewhere, which in my experience tend to become of questionable age and relevance. I think it will become more valuable to our deployers, and the deployers' performance test data (which would hopefully be more abundant with the lower barrier to entry) will become more valuable to the core team.
>>
>> [1] https://github.com/kcampos/Open-Performance-Automation-Framework
>> [2] https://confluence.sakaiproject.org/display/QA/CLE+Load+Test+Framework
>>
>> --
>> Cheers,
>> Branden
>>
>> _______________________________________________
>> oae-dev mailing list
>> oae-dev@collab.sakaiproject.org
>> http://collab.sakaiproject.org/mailman/listinfo/oae-dev
>
> --
> Kyle Campos
> Director of Quality Operations / rSmart
> kcam...@rsmart.com
> skype: kyle.campos
> phone: 623-455-6180
> GTalk: kcam...@rsmart.com

--
Kyle Campos
Director of Quality Operations / rSmart
kcam...@rsmart.com
skype: kyle.campos
phone: 623-455-6180
GTalk: kcam...@rsmart.com
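[A footnote on Branden's point about making the JMeter tests executable from the same command line the OAE build uses: he doesn't name the Maven plugin, but one common way to wire this up is the com.lazerycode.jmeter jmeter-maven-plugin, which runs the *.jmx plans it finds under src/test/jmeter as part of the build. A minimal sketch -- the plugin choice and version are assumptions, not something stated in this thread:]

    <plugin>
      <groupId>com.lazerycode.jmeter</groupId>
      <artifactId>jmeter-maven-plugin</artifactId>
      <version>1.8.1</version>
      <executions>
        <execution>
          <id>performance-tests</id>
          <phase>verify</phase>
          <goals>
            <goal>jmeter</goal>
          </goals>
        </execution>
      </executions>
    </plugin>

[With that in the POM, "mvn verify" executes the checked-in test plans, which is what would let the JMeter tests ship as a release artifact rather than a separate set of scripts.]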