A tidy little analogy between architecture school and automated test suites
I had a professor who would turn his students' scale models around in his hands, holding them about two inches from his nose, and very subtly wiggle various bits of the model.
Fairly often we'd hear a snap, a piece would come off in his fingers, and, rather than apologize, our professor would shrug, say that the piece really wasn't in service of the design idea and didn't need to be there anyway, toss it aside, and resume his deconstructive criticism.
For the most part he was right: any piece he could remove with a small wiggle was quite likely to be poorly attached conceptually to the overall design, and its actual physical attachment was a surprisingly accurate proxy, legible even to know-nothing kids, for the harder conceptual judgment. (We did eventually start to grok the deeper idea, but mainly the short-term result was that we started gluing the hell out of things.)
I was remembering this and jumped from it to a terrible idea for ruthlessly helping people internalize what makes for a good automated test suite:
Imagine an automated testing tool that removes each method in your codebase in turn and runs the entire test suite, checking whether at least one test fails. If the tests still pass, the tool fails to apologize, shrugs, says the method didn't need to be there anyway, deletes it entirely, and resumes with the next pass.
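For what it's worth, this terrible idea is essentially mutation testing with a single, maximally blunt mutation operator: delete the whole method. Here's a minimal sketch of what the tool might look like in Python. The module name `mymodule.py`, the pytest invocation, and the `wiggle` helper are all assumptions for illustration, and it's naive about corner cases like duplicate method names across classes.

```python
import ast
import shutil
import subprocess
from pathlib import Path

SOURCE = Path("mymodule.py")    # assumption: the module being "wiggled"
TEST_CMD = ["pytest", "-q"]     # assumption: the suite runs under pytest


def find_span(source: str, name: str):
    """Line span (start, end) of the first def with this name, or None."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)) and node.name == name:
            return node.lineno, node.end_lineno
    return None


def suite_passes() -> bool:
    """True if the whole test suite passes against the source currently on disk."""
    return subprocess.run(TEST_CMD, capture_output=True).returncode == 0


def wiggle(path: Path) -> None:
    names = [n.name for n in ast.walk(ast.parse(path.read_text()))
             if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))]
    shutil.copy(path, path.with_suffix(".bak"))  # escape hatch for the whole run
    for name in names:
        source = path.read_text()
        span = find_span(source, name)  # re-parse each time: deletions shift line numbers
        if span is None:                # already snapped off inside an enclosing def
            continue
        start, end = span
        lines = source.splitlines(keepends=True)
        del lines[start - 1:end]        # remove the def and its whole body
        path.write_text("".join(lines))
        if suite_passes():
            print(f"*snap* {name} wasn't in service of the design idea anyway")
        else:
            path.write_text(source)     # a test noticed: glue the method back on


if __name__ == "__main__":
    wiggle(SOURCE)
```

Real mutation-testing tools (PIT for the JVM, Stryker, mutmut) run the polite version of this loop: they report the mutants your tests failed to notice instead of shrugging and deleting your code.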