Putting Our Assumptions To The Test

I’m currently reading Eric Ries’ book The Lean Startup. Ries talks a great deal about experimenting and validating learning. Often we provide products or create services because we think they will have an impact or are what our users want. But in a number of examples Ries provides, adding new features or services does not create any change at all, and a lot of what organizations do is superfluous. This leads him to ask, “which of our efforts are value-creating and which are wasteful?”

To answer this question, he says we need to identify and test our assumptions through a number of small experiments. He also says we need metrics that can actually tell us something, as opposed to vanity metrics. An example of a vanity metric in libraries would be something like gate count. It says “we have a bunch of people coming in and out of the building,” but it doesn’t go much further than that. Why are these people coming in? Does it have anything to do with our efforts?

He also talks about “success theater” (the work we do to make ourselves look successful). It’s good to have charts and graphs that go up and to the right, but do those actually tell us anything? Is it our efforts that are making a difference, or something else? Are we accidentally getting it right? Is it a fluke? What happens if the numbers go down?

So this brings me to my question: what are the assumptions we have in libraries, and how do we test them?

Assumptions abound in libraries: students need research help from librarians, we need to be on social media, students need to be taught how to use a database. These assumptions might differ from institution to institution, but each place has its own.

We also have a variety of metrics and numbers that we can pay attention to in libraries: gate count, database statistics, circulation numbers, reference statistics, number of classes taught, assessment data, student surveys, etc. Which numbers are really valuable for testing assumptions and which are just noise?

What are some of our assumptions in libraries? What assumptions do you test at your library? What assumptions would you like to test? What metrics do or could you use to validate your learning?

Cutesy 2.0 names

There are a few rules you need to follow when naming your new Web 2.0 startup.

  1. Have a real cutesy name. (Something like Bebo or Jing. It gets your customers excited about something cutesy while not telling them anything about the company. It’s like a surprise.)
  2. Your best bet is to have a name that includes double letters together so customers can remember it. (Names like Joost, Meebo, Moodle, Joopz, Goowy, or Blummy work well. Extra points if you put three letters in your name like Zooomr.)
  3. Rule two is especially important if you are in the social bookmarking business. (Two big ones, Digg and Reddit, know the score. Del.icio.us is getting along without the double letters, but the way they split up their name is cutesy enough.)
  4. Hurry! All the good names are running out! (I tried making up a name for my own startup 2.0 web site. I figured if I could get a sweet enough name Yahoo or Google [both double letter names] would buy me out. Unfortunately Squibble and Grinky were already taken.)

Now, I have decided to start up a consulting business where I help fledgling 2.0 companies find a name.

Me: “Alright, tell me what your company does exactly.”

Them: “Well, we are a synergistic, hybrid blend of chat, VoIP, social networking, and user-created content of some sort.”

Me: “Hmmmm… you’re now Chubblekins. Wait, wait, wait! Zoombango!”

Those two are free. The rest you’ll have to pay for.