
The importance of empirical evidence for early-stage deeptech companies

In light of the latest news on Theranos, and the seemingly endless flow of capital pouring swiftly into deeptech startups today, how can investors make sure similar mistakes are not repeated?

While our investments are based on market-first principles, as a firm specializing in deeptech companies there are a few key questions we consistently ask ourselves during the diligence process to ensure that our decisions are based on scientifically sound evidence.

Teased out, these questions largely parallel three key sections of a standard peer-reviewed research article: results, methods, and discussion. For those of you who have not had the pleasure of going through the scientific publication process, these are the three must-have sections, preceded by an abstract/introduction and wrapped up with a conclusion.

Results: what happened

One of the first questions we ask ourselves, focusing on Series Seed companies, is “does it work?” And a simple “yes” does not suffice. 

It needs to be backed up with experimental (and ideally quantitative) results. Just as a picture is worth a thousand words, a well-put-together figure speaks volumes about a product’s stage of development.

If you are claiming your product is more efficient than the status quo, let’s see it in the figure. Show us the controls, the parameters being measured, error bars, and some stats. This is obviously not applicable to all product types, but the concept stands: show, don’t tell.
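As a rough illustration (not taken from any particular pitch), here is a minimal Python sketch of the kind of quantitative backing described above: repeated measurements for a control and a prototype, the means and standard errors that would become error bars in a figure, and a simple significance test. All names and numbers are hypothetical placeholders.

```python
import numpy as np
from scipy import stats

# Hypothetical repeated measurements of some efficiency metric (%)
control = np.array([61.2, 59.8, 62.5, 60.4, 61.9])    # status-quo device
prototype = np.array([68.4, 70.1, 69.3, 71.0, 68.8])  # new product

# Means and standard errors: what the error bars in the figure would show
for name, data in [("control", control), ("prototype", prototype)]:
    print(f"{name}: mean={data.mean():.1f}, sem={stats.sem(data):.2f}, n={len(data)}")

# A simple two-sample test: the "some stats" backing the efficiency claim
t_stat, p_value = stats.ttest_ind(prototype, control, equal_var=False)
print(f"Welch's t-test: t={t_stat:.2f}, p={p_value:.4f}")
```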

Methods: how did it happen

The devil is in the details. Frankly, the methods section is usually the most boring to read (and to write). But it’s there for a reason. Your results mean very little without detailing how you got there. Now, we do not expect the same level of detail as if it were going to be sent for peer review. There’s no need to show all your cards, whether that’s a trade secret or IP in development, but it is crucial to understand the basis on which the results were derived. Which prototype was used to run the experiment? How many times were the measurements taken? Was the setup comparable between the control and the data point in the spotlight?

These questions may seem like housekeeping protocol, but they make a difference in how your results are interpreted when answering the question, “Does it work?”

Now, there is the extreme case of intentionally presenting sample reads from a commercial device as if they were derived from your own product, but ambiguity in methodology can be misleading even with the best intentions. It’s our responsibility to do thorough diligence, as much as it is the entrepreneurs’ responsibility to be transparent and honest.

Discussion: why did it happen

This is where the results are interpreted in the context of existing knowledge; where an attempt is made to understand the mechanism; where a “black box” is a less-than-satisfactory answer. In short, a sanity check.

If your product breaks an existing boundary, what is it that you do differently that mechanistically leads to the advancement? There is danger in trying to come up with a theory to connect and explain limited data points, but blindly accepting that there is a magic black box is a trap. Besides, that causal mechanism is often the core technology the company specializes in, so it should be well understood and explainable.

One final thought

In hindsight, it is easy to list what went wrong. This is an excellent reminder for all of us in the deeptech investment space not to get carried away by fear of missing out or by the influx of competing capital, and not to compromise quality for speed.

When in doubt, tune out the noise and let the data speak for itself.
