The Lean PhD Student — Can The Lean Startup principles be applied to personal productivity in graduate school?
The lean startup methodology is a set of principles proposed and popularized by Eric Ries in the book The Lean Startup (and elsewhere). Ries argues that startup success can be engineered by following this methodology, and he defines a startup as “a human institution designed to deliver a new product or service under conditions of extreme uncertainty”. If we replace “product or service” with “research result”, that sounds awfully similar to what a PhD student has to do. Indeed, the similarities between being a junior researcher, such as a PhD student, and running a startup have often been pointed out. In light of this, I propose that the lean startup methodology can also be applied to the academic pursuits of a PhD student. Below, I adapt some of the most important lean startup concepts for application to a junior researcher’s personal productivity and academic success.¹
If we want to carry startup concepts over to academic research, then the first (and most obvious) question is: what would be the “product”, and who would be the “customer”, of the PhD student? I think the analogy here is quite straightforward. The “products” of a PhD student clearly are the student’s peer-reviewed publications, conference presentations, the dissertation, software releases, etc.; and the “customers” are other researchers and, to a much smaller extent, the general public. An especially important set of (quite often tough) “customers” includes journal and conference paper reviewers and editors, as well as the student’s committee members.
Build - Measure - Learn
At the center of the lean startup methodology is the so-called build-measure-learn feedback loop. Two of the methodology’s main goals are to minimize the time (and other resources) required to pass through this loop, and to maximize the number of times the loop is completed. Its adaptation to academic research would be something like the following.
1. Start with a novel idea, whose good execution you assume to be valuable to your scientific audience, and then share a minimally viable execution of the idea with members of your audience.
The concept of a minimum viable product (or MVP) is especially important during this stage of the lean startup trajectory, in order to minimize the time spent here. The Lean Startup defines the MVP as the “version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort”. Analogously, I think a minimum viable research result might consist of an exploration of the main idea on small samples, toy problems, and special cases, designed so that the researcher can obtain sufficient feedback on the idea with the least effort.
2. Observe how other researchers react to your idea and its minimally viable execution.
In this step it is important to use so-called actionable metrics, as opposed to vanity metrics. Actionable metrics accurately reflect the key success factors of the project, while vanity metrics are measurements that give “the rosiest picture possible”. For academic research, actionable metrics may include (this is not an exhaustive list):
- Direct feedback from researchers who you trust.
- Others applying your work to their own research.
- Major contributions to peer-reviewed publications.
And (academic) vanity metrics may include:
- Co-authorships on other people’s papers with only a nominal contribution.
- Association or acquaintance with a “big name” scientist.
- Number of views of a researcher’s homepage, or paper view count on some online platform.
- Appearance in mainstream media.
Measuring the right metrics is a big part of what Eric Ries calls innovation accounting.
3. Learn, from the feedback received and your actionable metrics of choice, how valuable your audience actually considers your idea to be. Use that new knowledge to improve the initial idea, making it more valuable to your target scientific audience, and adjust your assumptions about what that audience needs.
In The Lean Startup this kind of modification of the initial idea is called a pivot, or in Eric Ries’ words: “A pivot is a structured course correction designed to test a new fundamental hypothesis about the product, strategy, and engine of growth.” With the corrected research idea in hand, you go back to step 1 and repeat the whole process.
So, what do we get out of all of this? I think that a clear strategy emerges here.
The strategy consists in striving to push results out fast in order to receive feedback fast, and in evaluating that feedback against a suitable set of actionable metrics chosen in advance. That is, one needs to write papers quickly, initially without worrying about things outside the scope of an MVP, such as the perfect word choice, optimal formatting, or coverage of all corner cases, so as to obtain and measure feedback from the target scientific audience as soon as possible. The ideas are then improved according to what was learned, and the process is repeated. One of the PhD student’s major goals should be to minimize the time required to pass through this loop.
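To make the loop concrete, here is a deliberately toy Python sketch of iterating through it. Everything below — the function names, the simulated metric, and the 0.7 threshold — is an invented placeholder for illustration, not anything prescribed by the book or by any real research workflow.

```python
# Toy sketch of the build-measure-learn loop applied to a research idea.
# All functions, metrics, and thresholds are hypothetical placeholders.

def build_mvp(idea):
    """Build: produce a minimum viable research result
    (small samples, toy problems, special cases)."""
    return f"draft exploring: {idea}"

def measure(draft):
    """Measure: collect actionable metrics, e.g. feedback from trusted
    colleagues; simulated here as a fixed score for illustration."""
    return {"trusted_feedback_score": 0.4, "external_uses": 0}

def learn(idea, metrics):
    """Learn: pivot the idea when the metrics fall short."""
    if metrics["trusted_feedback_score"] < 0.7:  # hypothetical threshold
        return idea + " (narrowed to the special case reviewers found promising)"
    return idea

idea = "new estimator for sparse data"
for _ in range(3):  # the goal is many fast passes through the loop
    draft = build_mvp(idea)
    metrics = measure(draft)
    idea = learn(idea, metrics)
```

The point of the sketch is only that the loop body should be cheap enough to run many times, which is exactly the role the MVP plays above.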
So is this a good strategy for a PhD student? Well, I can’t say before I try it out. One crucial factor not mentioned here, though, is the PhD advisor. In my case I have a lot of freedom to come up with my own projects and pursue my own ideas, as long as they are within a specific (but somewhat loosely defined) area, so I could totally incorporate this lean-startup-inspired research strategy into my work. At the other extreme, there are professors who micromanage their PhD students’ every step, in which case the student will find it much harder to experiment with their research strategy.
¹ Please note that I’m writing from the point of view of the mathematical, statistical, and computational sciences, rather than from the viewpoint of the experimental sciences. ↩