On Oct 7, 2007, at 12:31 AM, Chad Humphries wrote:

> Scott,
>
> I don't really have a lot to contribute on how to make it faster,
> other than to outline what we've been doing on our projects.
>
> On one of our current projects we have the following 2570 examples
> that run in ~70 seconds on our pairing stations (mac minis, 1.83
> c2d). In general, across our various machines it is at or a little
> more than a minute for specs for controllers, models, helpers, lib,
> views, and plugins. Our Story suite takes longer, but it's still
> under development so I don't really count it at this point. We have
> Ruby 1.8.6 installed from MacPorts on all machines, as well as
> MySQL 5 (current as of a month ago) from MacPorts.

I assume your "pairing stations" are two separate mac minis on which
you practice pair programming? Or is this a cluster of two mac minis?
Either way, this sounds great - 70 seconds for ~2500 specs. How many
of those are model specs (the ones that hit the database)?

> We make good use of mocking and stubbing through our controller
> tests, and little use of fixtures. We primarily use the
> spec_attribute_helper (or factory method) as Luke Redpath and Dan
> Manges have outlined in their respective blog articles. I've been
> looking at DeepTest, or possibly spec_distributed, as a way to speed
> things up more.

The factory method (or attribute_helper) still hits the database, so I
don't see it as any sort of performance gain. In fact, I've developed a
plugin around the Factory idea myself, and it was only when I started
using it in all of my tests that the speed really started to affect me
(I had been using mocking/stubbing, with much frustration, prior to
that point). But to me it's pretty clear that the plugin (or the
factory) is not the problem - the hit is the database.

DHH saw this hit, and since they were using fixtures, he found that
creating the fixtures once and then wrapping each test in a transaction
was a huge performance gain. I wonder if the same would be true with
setups / before(:each)... (I've put a rough sketch of what I mean at
the end of this mail.)

The obvious way to solve the performance problem is to remove the hit
to the database. The question is: at what level of abstraction should
this be done? One camp (which would include fellows like Jay Fields)
would mock/stub everything they don't write. For me, testing is more
than testing - it's the documentation to my code which never lies to me
(this documentation is so good that I can give it to my boss, who is
not a programmer). For that reason (testing is not about testing), I
could never stub/mock AR inline - that is, in the tests themselves
(although if I had some sort of external plugin to do this for me, I
might feel differently about it).

The other alternatives seem to be: b) writing a SQL parser, and
c) speeding up the database (DeepTest, in-memory databases, more
hardware, etc.) - with a) being the mocking/stubbing above. The SQL
parser is a big job, and speeding up the database doesn't seem to give
the performance improvement I'm looking for. As for DeepTest, I
wouldn't mind helping out with getting it to work for RSpec (email me
off-list at [EMAIL PROTECTED] if you are interested in my help). I
haven't explored spec_distributed at all (in fact, I didn't even know
it existed, although I had a similar thought a few days ago).

> Our main issue is that our precommit task (rake cruise, which our CI
> server also runs) executes rcov to test for full coverage and adds
> 15-25 seconds to the whole thing, bringing it up to a minute and a
> half.

Coverage is not a big point for me. I'm happy with running rcov only
once a week, assuming that I have 100% coverage right now and that I
develop everything test-first. Right now I'm nowhere near 100%, so I'm
not too worried about it.
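Here's the rough sketch I mentioned above for the transaction idea. It
is only a sketch, not something I've benchmarked: it assumes the
transactional-fixtures switch that rspec_on_rails puts in the generated
spec_helper.rb, and the Thing model is made up purely for illustration.

# spec/spec_helper.rb (only the relevant line of the generated file)
Spec::Runner.configure do |config|
  # wrap each example in a database transaction that is rolled back
  # when the example finishes, whether or not fixtures are loaded
  config.use_transactional_fixtures = true
end

# spec/models/thing_spec.rb
require File.dirname(__FILE__) + '/../spec_helper'

describe Thing do
  before(:each) do
    # still an INSERT on every example, but if the per-example
    # transaction is already open here, the row gets rolled back
    # afterwards instead of having to be deleted
    @thing = Thing.create!(:name => "example")
  end

  it "should find the record created in the setup" do
    Thing.find(@thing.id).should == @thing
  end
end

If data created in before(:each) really does get rolled back the same
way fixture data does, that would at least kill the cleanup cost - but
the INSERT on every example is still there, which is why I keep coming
back to removing the database hit altogether.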
Regards,

Scott
_______________________________________________
rspec-users mailing list
[email protected]
http://rubyforge.org/mailman/listinfo/rspec-users