Marketplace Research Methodology

The goal of this project was to study patterns in successful online marketplaces to identify and evaluate new opportunities.

We started by gathering lists of marketplaces from Crunchbase and PitchBook, which included basic data on funding and business status. This produced a list of about 4,500 marketplaces. From this list we prioritized roughly 1,200 that either had a Crunchbase rank above 20,000 or had raised at least $1M in venture capital. When we identified large marketplaces that were missing, we added them to the list.
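The prioritization step can be sketched as a simple filter. This is a minimal illustration, not the actual pipeline; field names are hypothetical, and we interpret "rank above 20,000" as a Crunchbase rank number of 20,000 or lower, since on Crunchbase a lower rank number means a more prominent company.

```python
# Sketch of the prioritization filter: keep a marketplace if its Crunchbase
# rank is strong OR it has raised meaningful venture capital.
# Field names ("crunchbase_rank", "total_funding_usd") are illustrative.

def prioritize(marketplaces, rank_cutoff=20_000, min_funding=1_000_000):
    """Return marketplaces with a strong rank or at least min_funding raised."""
    return [
        m for m in marketplaces
        # Missing rank is treated as "worst possible" so it can't pass by accident.
        if m.get("crunchbase_rank", float("inf")) <= rank_cutoff
        or m.get("total_funding_usd", 0) >= min_funding
    ]

companies = [
    {"name": "A", "crunchbase_rank": 5_000, "total_funding_usd": 0},
    {"name": "B", "crunchbase_rank": 90_000, "total_funding_usd": 2_500_000},
    {"name": "C", "crunchbase_rank": 90_000, "total_funding_usd": 0},
]
shortlist = prioritize(companies)  # A passes on rank, B on funding; C is dropped
```

Either criterion alone is enough to keep a company, which matches the "or" in the description above.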

A team in the Philippines helped collect a number of data points on each company, including:

  • Revenue
  • GMV
  • # of suppliers
  • # of buyers
  • Fees charged
  • Supplier/Buyer information:
    • Who is on the buyer side (e.g. diners for OpenTable)
    • Who is on the supply side (e.g. restaurants for OpenTable)
    • Company vs individual
    • Business (e.g. Uber) vs. personal (e.g. Craigslist)

Our team went through this research list to verify the data. Ultimately we were able to collect most of these data points for 500 of the companies on the list. Another 342 turned out not to be marketplaces (or it was unclear), 203 were marketplaces for which we couldn’t find certain key metrics, and 168 had websites that were down or inaccessible from the Philippines.

We filtered this list further to the top 100 and double-checked each of the key numbers for those marketplaces (see top 100 marketplaces). We also did additional one-off manual research on how these marketplaces were seeded and scaled.

There are a few challenges with this approach:

  • Old data – Finding data for non-public companies is challenging. Some of the data may be several years old and we didn’t make an effort to estimate the most recent year’s data.
  • Revenue data – We caught dozens of cases where companies referred to their Gross Merchandise Value (GMV) as revenue, likely to make the numbers seem more impressive. There are likely some cases we missed, though mostly among smaller marketplaces, since we carefully checked the top 100.
  • Success bias – Because of this approach, we captured very little data on companies that had failed, especially failures from 10 or more years ago. This makes it challenging to determine which strategies are more difficult than others.
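The GMV-as-revenue check described above can be sketched as a simple heuristic. The thresholds and field names here are illustrative assumptions, not the team's actual rules: identical revenue and GMV figures are the clearest tell, and an implied take rate well above what marketplaces typically charge is a secondary flag.

```python
# Hedged sketch of a GMV-vs-revenue sanity check. Field names ("revenue",
# "gmv") and the 50% take-rate threshold are illustrative assumptions.

def flag_gmv_as_revenue(company, max_take_rate=0.5):
    """Flag companies whose reported revenue looks like GMV in disguise."""
    revenue = company.get("revenue")
    gmv = company.get("gmv")
    if revenue is None:
        return False
    # Identical figures are the clearest sign revenue is really GMV.
    if gmv is not None and revenue == gmv:
        return True
    # Marketplaces rarely keep more than ~50% of GMV as revenue, so an
    # implied take rate above that is suspicious.
    if gmv and revenue / gmv > max_take_rate:
        return True
    return False

flag_gmv_as_revenue({"revenue": 100, "gmv": 100})  # flagged: revenue == GMV
flag_gmv_as_revenue({"revenue": 10, "gmv": 100})   # not flagged: 10% take rate
```

A check like this only surfaces candidates for manual review; it cannot confirm mislabeling on its own, which is why the top 100 were verified by hand.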

If you have any feedback on our approach or notice any errors, please don’t hesitate to reach out.

Thank you to Adam Wagner, Nima Wedlake, and Corey Reese for contributing data, doing data QA, or reviewing the findings.