I’m currently reading Eric Ries’ book The Lean Startup. Ries talks a great deal about experimenting and validated learning. Often we provide products or create services because we think they will have an impact or are what our users want. But in a number of examples that Ries provides, adding new features or services does not create any change at all, and a lot of what organizations do is superfluous. This leads him to ask “which of our efforts are value creating and which are wasteful?”
To answer this question he says that we need to identify and test our assumptions through a number of small experiments. He also says that we need metrics that can actually tell us something, as opposed to vanity metrics. An example of a vanity metric in libraries would be something like gate count. It says “we have a bunch of people coming in and out of the building,” but it doesn’t go much further than that. Why are these people coming in? Does it have anything to do with our efforts?
He also talks about “success theater” (the work we do to make ourselves look successful). It’s good to have charts and graphs that go up and to the right, but do those actually tell us anything? Are our efforts making a difference, or is it something else? Are we accidentally getting it right? Is it a fluke? What happens if the numbers go down?
So this brings me to my question: what are the assumptions we have in libraries and how do we test them?
Assumptions abound in libraries: students need research help from librarians, we need to be on social media, students need to be taught how to use a database. These assumptions might differ from institution to institution, but each place has its own.
We also have a variety of metrics and numbers that we can pay attention to in libraries: gate count, database statistics, circulation numbers, reference statistics, number of classes taught, assessment data, student surveys, etc. Which numbers are really valuable for testing assumptions and which are just noise?
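To make this concrete, here is a minimal sketch of what one of Ries’ small experiments could look like with library numbers. The scenario and all the figures are hypothetical (they are not from the book or any real library): suppose we want to test the assumption that instruction sessions drive database use, so we compare database uptake between students who attended a session and students who didn’t, using a simple two-proportion z-test.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Z-statistic comparing two proportions, using a pooled standard error."""
    p_a = success_a / n_a
    p_b = success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: of 400 students with no instruction session, 60 later
# used a database; of 400 who attended a session, 90 did.
z = two_proportion_z(60, 400, 90, 400)

# |z| > 1.96 would be conventionally significant at the 5% level, i.e. some
# evidence the session (not chance) is behind the difference.
print(round(z, 2))
```

The point isn’t the statistics; it’s that a metric like “database logins” only becomes validated learning when it’s tied to a comparison like this, rather than reported as a raw (vanity) total.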
What are some of our assumptions in libraries? What assumptions do you test at your library? What assumptions would you like to test? What metrics do or could you use to validate your learning?