Re: LoadTesting ITSM 764
** There is probably some confusion here, as I have been with Scapa Technologies for close to 15 years and have been part of product development ever since, but did not understand what was being referred to as a "Scapa beta". Some background information may be useful to avoid any misinterpretations (even if probably too detailed :-)

- Scapa Technologies was founded in 1998, and product development started with a flexible platform for testing and performance validation. It was designed to be generic and to provide a framework for adding support for different protocols and applications.
- After covering a few "essentials" like HTTP, ODBC and MAPI, the first application support was added, for Essbase OLAP performance testing. The focus was very much on simplification, automation and optimisation of the test creation process, and this approach and some of the methodologies "leaked" into later products. User transactions with the system were captured, parameterised and processed to create valid tests.
- Scapa tests are always created as a transactional (data-linked) sequence of API or protocol calls, making them hard to distinguish from real users, but much easier to control.
- Some 13 years ago we opened up a Scapa API to allow our partners to add support for additional technologies and applications, but no results came from that, and development of the Scapa product has been done fully in house ever since.
- The first Scapa product for Remedy testing came out some 10 years ago, allowing automatic test creation based on user interaction with the system through the Remedy User Tool (AR System API). We worked with BMC to get everything needed to recreate user transactions in a test.
- Scapa uses a standard client to capture transactions and also acts as a client to gather the application and form data from the server that is needed in the process of creating a valid transactional sequence, or Scapa test.
This was not a trivial development effort (even though we already had a stable platform to work on), as it took a number of development man-years to make the whole process correct, practical and simple for end users.

- Over the years, parts of the Scapa Remedy product were completely rewritten (speed optimisation to allow a very large number of users to be simulated from modest hardware).
- In the last 10 years, Scapa for Remedy has been used to test, validate, optimise and maintain a number of systems of various sizes, including some of the largest Remedy deployments (the largest one that I have been involved with was designed to support some 20 million customers).
- As the User Tool has been phased out, the Scapa Remedy technology, methodology and knowledge were used to replicate in the Mid Tier space what we had already done for the User Tool, allowing side-by-side (User Tool + Mid Tier) or pure Mid Tier testing.
- A few other technologies were added to the Scapa suite over time, probably the most relevant being support for thin clients (Citrix, RDP, VMware), but also allowing "client-server" testing using real clients (User Tool, custom clients or a web browser). This permits the client-side GUI to be driven through a series of user actions (mouse clicks or key presses) instead of the HTTP protocol or API.

One thing that I would agree with is that blasting "random" API calls at a server would never accurately recreate the transactional nature of user interaction with a Remedy system, and is therefore of little or no use. Scapa, however, was never going in that direction.

Armen Avedisijan
Scapa Technologies

___ UNSUBSCRIBE or access ARSlist Archives at www.arslist.org "Where the Answers Are, and have been for 20 years"
Re: User locks in db causing stuck threads that aren't timed out
Paul,

RE: 2. Does anyone have any idea of a way to detect these before it gets real bad, other than looking for steady thread growth?

If the time that it takes to get "3rd party vendor information for the site a device is associated with" is impacted for "other" sites (not the site on which all the threads are stuck), it may be enough to run a fairly simple health-check transaction (better through Remedy than direct SQL) once every few minutes. A transaction time outside defined boundaries can then be used to raise an alarm to let you know that things are bad. This would avoid having to run a query against every site or depend on a thread count.

It may also be useful to try to recreate, in a controlled environment, the outage that is causing the problem you are describing (to avoid having to wait for the issue to show itself on a live system). This could also be a fairly simple set of transactions (read + insert, all for a single vendor) but run with a larger simulated user population. There are probably a few options for how this can be done, but let me know if you need any help.

Regards,
Armen Avedisijan
Scapa Technologies Limited
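The health-check idea above can be sketched in a few lines. This is a minimal illustration, not a Scapa or Remedy API: `run_transaction` stands in for the real probe (e.g. a read of vendor information through the AR API), and the response-time boundary is an assumed value to be tuned per system.

```python
import time

WARN_SECONDS = 2.0  # assumed response-time boundary; tune per system

def timed_probe(run_transaction):
    """Run the health-check transaction once; return (elapsed_seconds, alarm_flag)."""
    start = time.monotonic()
    run_transaction()
    elapsed = time.monotonic() - start
    # Alarm when the transaction time falls outside the defined boundary.
    return elapsed, elapsed > WARN_SECONDS

# Simulated probe that completes quickly, so no alarm is raised:
elapsed, alarm = timed_probe(lambda: time.sleep(0.01))
print(alarm)
```

In practice this would be scheduled every few minutes and the alarm wired into whatever monitoring you already run.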
Re: LoadTesting ITSM 764
Using a single user account in a test like that would also mean that, as the test progresses, the user will have a rapidly increasing number of open tickets. So in addition to the caching effect, if a test is allowed to run long enough, or is repeated a few times, there will probably be a gradual degradation of system responsiveness (the signature is similar to a memory leak).

All of that can be easily avoided if you have more than one system user, you allow users to close as well as create tickets so that the total number of open tickets stays fairly steady, and you also allow users to create tickets for more than one customer (again avoiding caching and clashing). While you can keep the whole test fairly simple, you would introduce enough variability to get meaningful and useful results, avoiding both false positives and false negatives. And that should be valid for both Mid Tier and User Tool.

As Dom mentioned, for this to be practical you should use a tool that not only recognises Remedy API or Back Channel calls but can also identify the set of IDs needed, and ideally also creates a set of parameters that will allow you to easily use different user or customer data (capture, auto-parameterise, replay).

Using Scapa, creating a test like yours should not take more than 15-20 minutes, and if you like I can show you the whole process on one of our servers (either for User Tool or Mid Tier).

Regards,
Armen
Scapa Performance Testing
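The steady-state workload mix described above can be sketched abstractly. All names here are illustrative (this is not a Scapa or Remedy API): simulated users both create and close tickets, and customers are rotated, so the open-ticket population stays bounded instead of growing for the life of the test.

```python
import itertools

customers = [f"CUST-{i:03d}" for i in range(10)]  # assumed customer pool
customer_cycle = itertools.cycle(customers)
open_tickets = []
TARGET_OPEN = 50  # assumed steady-state open-ticket population

def one_iteration():
    # Each iteration creates a ticket for the next customer in rotation...
    open_tickets.append({"customer": next(customer_cycle)})
    # ...and closes the oldest ticket once the steady-state level is
    # reached, so the total number of open tickets stays fairly constant.
    if len(open_tickets) > TARGET_OPEN:
        open_tickets.pop(0)

for _ in range(1000):
    one_iteration()
print(len(open_tickets))  # stays at the steady-state level, not 1000
```

A test built this way avoids both the single-customer caching effect and the ever-growing-ticket-count "memory leak" signature.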
Re: LoadRunner User simulation
But also be careful not to oversimplify your tests. If, for example, testing focuses on Incident Management, one transaction would normally be new incident creation, and the test would need to create a valid new incident regardless of the type of testing. Running concurrent ITSM transactions does not mean that sending static HTTP requests (hard-coded data) will be sufficient, as this would most likely generate a significant number of application-level errors, clouding the overall performance picture (to continue with the old-fashioned use of the word :). Test transactions also need to be valid for things like automatic ticket assignment or SLAs (in my incident-creation example) to be invoked correctly, making test results more relevant.

The choice of testing tool should not drive the testing requirements; it is best to start with the required outcome of the testing and only then look at the tooling. While it may be possible to use a generic HTTP testing tool to load test Mid Tier, data encoding and AR-specific response validation complicate things quite significantly. Things are much better with one of the AR-specific web testing tools that can handle ITSM Back Channel calls seamlessly; the choice there is Scapa TPP or Silk.

___ UNSUBSCRIBE or access ARSlist Archives at www.arslist.org attend wwrug12 www.wwrug12.com ARSList: "Where the Answers Are"
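The contrast between a static (hard-coded) request and a data-driven one can be sketched as follows. The request format below is invented for illustration; real Mid Tier Back Channel calls are encoded quite differently. The point is that captured literals are turned into a template filled with per-user, per-iteration data.

```python
import string

# A captured request that a naive test would replay verbatim for
# every virtual user, triggering application-level errors:
static_request = "create?login=jsmith&customer=CUST-001&summary=Printer+down"

# The same request turned into a template (hypothetical field names):
template = string.Template(
    "create?login=${login}&customer=${customer}&summary=${summary}")

def build_request(login, customer, summary):
    """Substitute fresh per-iteration data into the captured request."""
    return template.substitute(login=login, customer=customer, summary=summary)

req = build_request("vuser07", "CUST-042", "Cannot+map+drive")
print(req)
```

Validating the server's response per request (rather than only the HTTP status) is what catches the application-level errors mentioned above.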
Re: LoadRunner User simulation
Rick,

I would just add that, in the same way as using multiple user accounts may change the performance you are observing, you also need to consider and handle data variability in your test transactions. Introducing variability (variable query parameters to start with, and then the correct storage and passing of query results to subsequent queries) will have an effect on the overall validity of the test and the accuracy of the results. So if you need to generate accurate load and measure system performance, you need to introduce variability into the test data. Variability of the user names is in a way just a special case of the broader question of handling data variability.

That is, of course, if you assume that the system can handle and manage multiple authenticated user sessions well, from Mid Tier all the way to the database.

Armen
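The "storage and passing of query results to subsequent queries" step, often called correlation, can be sketched like this. The payloads are invented for illustration (a real AR System response is encoded differently); what matters is that the dynamic value is extracted at run time instead of replaying the literal captured during recording.

```python
import re

# Hypothetical response to the first query, containing a dynamic value:
response_1 = "OK entryId=INC000000001234 status=Assigned"

# Extract the dynamic value returned by the first query...
entry_id = re.search(r"entryId=(\S+)", response_1).group(1)

# ...and substitute it into the subsequent request, instead of the
# entry ID that happened to be captured when the test was recorded:
request_2 = f"view?form=incident&entry={entry_id}"
print(request_2)
```

Without this step, every virtual user would re-request the same recorded entry, which is exactly the kind of invalid, cache-friendly load that distorts results.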