Why the Distributed Growth Model Is Failing Research Teams—and What to Build Instead
by Kate Towsey
The ResearchOps Review is brought to you by Rally—scale research operations with Rally’s robust user research CRM, automated recruitment, and deep integrations into your existing research tech stack.
A few weeks ago, I saw a graph in The Economist that was deeply enlightening. Unfortunately, I can’t find the article again, but it illustrated the surge in investment that defined the tech wave between 2010 and 2022. More than an illustration of the economy, the graph helped me understand that my career had been buoyed not only by hard work, ingenuity, and (I like to think) smarts, but also by impeccable timing. I joined the tech workforce at just the right time to enjoy an era of heedless spending on salaries, equity, and snacks—and sometimes rampant hiring. With this new framing in mind, plus years of studying the economy, I’ve come to understand that companies in emerging fields, like technology and now AI, operate according to a growth model, not a profit model, with significant implications for how teams within those companies should operate to succeed. This context is vital because it helps explain what’s happening in the fields of research and ResearchOps, and how to build teams that thrive even when the economy shifts or a novel field becomes mundane—tech[1] is no longer sexy; AI, defence, and the space sectors are.
How Growth and Profit Models Are Reshaping Research
The growth of the ResearchOps profession over the past decade has been remarkable. What started as a niche role in the most progressive technology companies of our time is now a role hired for by all sorts of companies, from startups to the BFSI[2] sector and legacy media giants. As part of that evolution, ResearchOps job descriptions and, by extension, ResearchOps professionals themselves, are becoming more specialised.[3] Laundry-list job descriptions—the type that list an impossible scale of work, never mind the scope—are becoming less prevalent, replaced by well-defined requests for strategists, systems designers, and knowledge and data specialists tasked with building the capability, both human and technical, for entire organizations to generate and consume insights at high speed and at large scale. And sometimes they’re being asked to do this without researchers.
If you work in ResearchOps and you’re looking for a more expansive career path, this may sound like great news, and it is. If you’re a researcher, this may sound horrifying, and it should. The expectation that ResearchOps can run the show solo isn’t just a problem for researchers; it’s also a problem for research operations and, in the long term, for the companies that take this tack. In this article, I’ll explain why this trend emerged (hint: growth and profit models are key), why it’s unsustainable, and why—and importantly how—research and ResearchOps leaders must partner in new ways to build the future of research.
An Unsustainable Trend: ResearchOps Without Researchers
Over the past two years, I’ve heard an increasing number of stories about entire research teams being laid off, and yet the ResearchOps function has remained intact, tasked with the job of enabling the rest of the organization to do what the research team used to do, and more. I’ve also heard stories of companies where ResearchOps is the first research hire: they’ve skipped the researchers and gone straight to ops. These teams aren’t only being asked to democratize the doing of research, they’re also being asked to integrate AI wherever it makes sense (literally) and build insights traffic systems, a term I’m introducing here, defined as systems that enable research insights to flow through the organisation in the right cadence, format, and grammar so that the audience, whether product, design, marketing, or executives, can easily access and digest them. Shivanjali Mishra’s recent article, “The Systems Linguist: How Mapping Data, AI, and Language Builds Smarter ResearchOps,” captures this beautifully and is essential reading.
The vision of democratized research, combined with AI integration and insights traffic systems, is exciting—it is the future of research—but a world in which researchers are either not involved or have been reduced to a tiny team of usability testers (as if it’s 2010 again) is not sustainable. ResearchOps professionals are highly capable strategists, systems architects, and business analysts, but unless they come from a research background, they’re just not as equipped to make decisions about research strategy, methodology, or quality management: all key to successful research operations.
If you’re a research leader, this message is for you: Whether companies know it or not, they need you. And they need you to respond to the change in how they’re operating by building and operating research teams, or research capabilities, in entirely new ways, too.
So, what does that look like?
To understand how you should operate now, it’s useful to understand how research scaled during the 2010–22 tech wave and synthesize the lessons learned for rebuilding a more robust research capability in your organization today.
How We Got Here: The Distributed Growth Model
According to “The Crunchbase Tech Layoffs Tracker,” since 2022, a total of 509,000 tech jobs have been impacted in the US. The layoffs weren’t a blunt response to economic pressures—the NASDAQ index has never been higher—or to the promise of AI. Instead, they’re symptomatic of a reshaping of how technology companies operate, which in turn is reshaping how every person and team within them operates, from product to design, and research to ResearchOps. But what pushed tech companies to make such a significant employment correction—one that’s affected countless professions—and what does it mean for research?
Between 2010 and 2022, UX research teams ballooned off the back of well-funded growth in tech—and even in the odd government. Companies were focused on growing the size of their customer base, number of monthly active users (MAU), and even the number of employees. Interest rates were low, growth was paramount, and profit was second fiddle—and the talent market was highly competitive.
To support growth in customers and MAU, companies invested in user researchers: in simple terms, happier MAU equals more MAU, which equals growth, which equals happy investors. (You can say the same thing for AI companies today.) This dynamic isn’t likely new to you, but here’s the important bit: often, this hiring didn’t happen as a centralized effort, one in which the “company” or, more accurately, the company’s executive said, “Let’s build a user research team that helps us make decisions about critical business areas.” Instead, unknowingly riding the tech wave, the research team grew via distributed investment.
Here’s how it played out: a product or design manager realised that the amount of research they needed, often on pre-launch usability testing—let’s make sure we’re not launching a flop!—exceeded the number of hours they had available. So, they secured the funds to hire a researcher, either as a contractor or a full-time employee. The researcher focused their efforts on usability testing and, without the complexities of a scaled-up research department or too many rules, could often deliver insights fairly quickly without anyone else needing to lift a finger. So, the manager, keen to maintain this new superpower, hired the researcher full time and, soon enough, hired more researchers, putting the first researcher in charge. And just like that, the first researcher on the scene became a research manager.
Soon, other product and design managers, envious of their colleagues’ research capabilities, secured headcount to hire their own researcher. So they “flipped a headcount” to the research manager, on the condition that the researcher they hired would be dedicated to their specific team. Over time, the research manager became a manager of ten researchers, then twenty, then thirty, and, in some cases, a hundred or more researchers, most, if not all, acquired through flipped headcount. As the team grew (and as the notion of ProductOps, DesignOps, and ResearchOps became more popular), the research manager secured headcount for a ResearchOps professional tasked with making researchers’ work easier. In truth, these folk often acted more as research assistants than research system designers, making the research team even more expensive to operate with little measurable value delivered beyond the research team…unless they were put in charge of democratizing research. In this case, they were given a platform to showcase their skills as highly efficient, business-aligned enablers for hundreds of people—an important point in this narrative arc.
There are lots of ways this story can play out, but the central theme remains the same: instead of executive teams allocating a centralized budget to build a research capability aligned with the company’s goals, and therefore geared to deliver executive-level value, product and design managers flip headcount one at a time and, in doing so, fund the growth of a research capability without anyone necessarily being aware of the collective organizational investment. But the collective investment isn’t invisible, or small. It’s accurately recorded, down to the cent, in the company’s accounts against a line item labelled “research.”
The Hidden Cost of Distributed Growth

You might be surprised by how many research managers don’t know the total cost of their team to the business, or the average cost of each research study, which is a major managerial mistake. As a back-of-the-napkin calculation (corroborated in detail by Claude), a ten-person research team based in San Francisco costs between $2.2 and $3.2 million per year, depending on benefits. That’s peanuts when a company’s annual revenue is $53.439 billion (that’s Intel’s revenue in 2025; Intel also had the most layoffs in 2025). But in a cost-cutting, profit-focused context, it all adds up—and there’s one more bit of interesting maths that you should do. On average, a researcher can deliver two or three qualitative studies per quarter, which means that every research study costs between $20,000 and $30,000 to deliver.
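For readers who want to run these numbers for their own team, the back-of-the-napkin arithmetic above can be sketched in a few lines. This is a minimal sketch: the team size, cost range, and throughput figures are the article’s illustrative assumptions, not industry benchmarks.

```python
# Back-of-the-napkin estimate using the figures above. All numbers are
# the article's illustrative assumptions, not industry benchmarks.
TEAM_SIZE = 10
ANNUAL_TEAM_COST = (2_200_000, 3_200_000)        # low/high, USD per year
STUDIES_PER_RESEARCHER_PER_QUARTER = (2, 3)      # low/high throughput

def cost_per_study(annual_cost: float, studies_per_quarter: int,
                   team_size: int = TEAM_SIZE) -> float:
    """Average fully loaded cost of one qualitative study."""
    studies_per_year = team_size * studies_per_quarter * 4
    return annual_cost / studies_per_year

# Pairing the highest cost with the lowest throughput (and vice versa)
# gives the widest possible spread:
low = cost_per_study(ANNUAL_TEAM_COST[0], STUDIES_PER_RESEARCHER_PER_QUARTER[1])
high = cost_per_study(ANNUAL_TEAM_COST[1], STUDIES_PER_RESEARCHER_PER_QUARTER[0])
print(f"${low:,.0f} to ${high:,.0f} per study")  # prints "$18,333 to $40,000 per study"
```

The widest spread brackets the $20,000-to-$30,000 estimate above; swap in your own headcount, cost, and throughput figures to see where your team lands.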
The composite of these numbers, along with distributed investment, is where the rubber hits the road. If a research study or insight only delivers value within its immediate context and then disappears into thin air, from the executive’s elevated point of view, research is simply expensive vaporware—$3.2 million-per-year vaporware, to be exact.
When investment in research is distributed, and research outcomes aren’t repurposed and redistributed across the company, don’t match the beat of product development, or miss the mark for the highest-priority audiences, each stakeholder who flipped a headcount might know the value their researcher delivered, but no one else will.
This kind of spending may pass muster while the company is focused on growth rather than profit, but when the focus shifts to maximising profits, as it did in 2022, the executive will comb through the financial reports and find ways to cut costs. If they’re unable to communicate or point to the value a team delivers, or are convinced there are cheaper ways to achieve the same goal (say, by democratizing the effort or leveraging AI), that team will find itself on the chopping block. But if there’s a small team of business- and tech-savvy operators who are already enabling hundreds of people to do and consume research…well, we’ll keep them, thanks.
The suggestion here is not that researchers find a way to tie their value (or the insights they deliver) to the bottom line, as sales or manufacturing might. That’s a wild goose chase, and something I cover in detail in my book Research That Scales (see Chapter 2, “Lost and Won on Strategy”). The suggestion isn’t even that you must compromise and democratize research to make it seem like you’re delivering value—and save your job. Research is a cost center (a team that’s not expected to generate revenue directly), and there’s no need to pretend otherwise. But cost center or not, every team in an organization must operate in a way that makes its value, or the perception of its value, obvious to the executive, which requires being highly strategic about how you operate. In practical terms, this means you must have a research strategy, research operations strategy, and operating model that conscientiously balance distributed and vertical value.
If the terms “research strategy,” “research operations strategy,” and “operating model” have caught your attention but you’re not sure what they are or how to create them, I’ve literally written the book for you. The first four chapters of Research That Scales are dedicated to these concepts. Over the past eight years, I’ve run masterclasses with hundreds of research managers, and I can count on one hand how many of them had a research strategy and a resulting operations strategy (and operating model) that wasn’t happenstance. That’s a huge problem. If you’ve been able to deliver executive-level value without purposefully defining what you will—and won’t—do (your strategies), and without designing a model for how your organization should operate to achieve those goals (your operating model), you’ve been lucky, not smart.
So what does smart look like?
An Intentional Paradigm Shift: Building Vertical Value Into Your Operating Model
Sometimes the only way to grow a research team is through distributed investment, and this type of growth isn’t something to avoid. But if you’re a research leader, rather than continuing to grow a bigger and bigger research team that only delivers distributed value, you must, from day one, find ways to secure buy-in or find inventive ways to build vertical value into how you operate.
Vertical value delivers value vertically, as the name suggests—upwards through the hierarchical layers of the organization—ideally all the way up to the executive. While horizontal value often requires scaled-up operations and a lot of busyness to deliver value (lots of studies, logistics, stakeholders, and communications), in the world of vertical value, if you make the right choices, you need only deliver one to three perfectly aligned and articulated research studies or systems to be perceived as worth your weight in gold. Here, your operations aren’t focused on enabling quantity or pace. Instead, the goal is to become indispensable to senior leadership by consistently empowering them to hit the bullseye on million- or billion-dollar decisions, and, if possible, all the day-to-day decisions, too.
When it comes to approach, your inventiveness is the limit. But you’ll only succeed if you respond to the unique context you find yourself in. A paint-by-numbers, checklist approach simply won’t cut it—you’ll rarely find that kind of advice in my writing. And unless you work for an AI startup, this is not the era for grand visions that require significant startup investment. Instead, take a “LeanOps” approach and make the most of what you have. That said, let’s look at four key pointers for going vertical.
1. Align with Executive Priorities
As is likely clear by now, you must find ways to deliver research value that’s directly and unequivocally aligned with the executive’s priorities. Delivering “research value” needn’t mean delivering more research—it might, but that shouldn’t be the assumption. Instead, or in complement, you might provide a voice-of-customer report or access to a beautifully curated research library focused entirely on the executive’s priority: those new AI features for government customers, say. What you do is dictated by the needs and personalities of the people you’re trying to empower with knowledge. Again, the options are limited only by your ingenuity and budget.
I regularly hear a number of excuses for why this kind of alignment isn’t possible. To be blunt, most of it is procrastination. The most common things I hear: “I don’t have a seat at the table,” or “the executive hasn’t published a strategy, so I don’t know what their priorities are,” or “we can’t do this kind of work without additional funding, and getting the funding is hard.” Here’s how to handle these scenarios:
I don’t have a seat at the table. Do this kind of work anyway, and regularly and artfully communicate your achievements in the language of the executive. Because you’re working in alignment with executive priorities—by definition, what they care about most—the work itself will likely earn you a seat at the table in time.
The executive doesn’t have a strategy. You can find out the executive’s priorities by asking someone in finance where the executive is spending the most money. It’s a simple but highly effective hack.
We don’t have the funding. If you align with a chief business priority and can offer a compelling story of how you can help, I guarantee you that ample funding will be made available to support the right efforts. You may not secure millions right off the bat, but you will certainly secure enough to deliver a minimum-viable example from which you can grow.
In the past and on multiple occasions, I’ve used these tactics to secure headcount, build specialist teams, deliver global research systems, and secure access to innovative research tools.
2. Repackage and Redistribute Insights
If you’re a research manager, I’ve got news for you: you’re not a research manager, you’re a research services manager—managing researchers is just one part of delivering a knowledge or insights service; it’s not the core purpose of the role. Core to your role is finding ways to create, repackage, and redistribute insights—the same insights generated through distributed investment—so they’re relevant to more senior levels of the organisation. If you’re not able to do this, perhaps because the initial insights are too shallow to repurpose for senior management, reconsider how research is done and whether you can build this kind of knowledge capture into your workflows without slowing delivery.
Research knowledge management is a huge topic, and thankfully, there are now excellent resources that you should devour. Here’s a short list of must-reads:
Research That Scales, Chapter 5, “Long Live Research Knowledge”
Stop Wasting Research by Jake Burghard
“The Systems Linguist: How Mapping Data, AI, and Language Builds Smarter ResearchOps” by Shivanjali Mishra is worth mentioning again
“Pragmatic Knowledge Management: From Scattered Insights to Serendipitous Intelligence” by Lilyth Ester Grove
3. Apply the Prioritized-Access Principle to Democratization
It’s common for research leaders to attempt to deliver vertical value, or scale up access to insights, by democratizing research—in other words, by enabling designers, product managers, engineers, and others to do research themselves. In principle, this is usually a good move, but democratization efforts are typically built on a significant strategic mistake, making them far less effective than they could be. That mistake? Assuming the organization is egalitarian. For-profit organizations are not: not everyone’s need for access is equal.
Your operating strategy, including your strategy for democratising research, must acknowledge that there are high-risk or high-priority stakeholders or topics, less influential people or topics, and those whose input or research topic needs are inconsequential. Though all of these groups might want access to the capabilities for doing research (or getting research done), in a profit-oriented world, enabling equal access to everyone is unwise and inefficient. It uses precious resources to deliver horizontally rather than vertically oriented value.
To use a concert-going analogy, too many research democratization efforts either aim to give everyone the all-access pass experience or to give everyone, including those working on high-priority projects, general-admission tickets. Key to a democratization strategy is ranking the access needs of various people or teams, the risk or importance of their work, and the research capabilities that are already available within those teams. Based on that information, you can design a system that provides all access to some, select viewing to others, and to everyone else, access to general admission or even an invitation to observe from the sidelines—or online. This advice isn’t about being elitist; it’s about being strategic with limited resources to demonstrate measurable value where it matters most to the business.
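To make the concert analogy concrete, here is a hypothetical sketch of how a prioritized-access policy might be encoded. The team names, scoring weights, and tier thresholds are invented for illustration, not a prescription:

```python
# Hypothetical sketch of the prioritized-access idea: score each team's
# need, then map scores to access tiers. Weights and thresholds are
# illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Team:
    name: str
    business_risk: int       # 1 (low stakes) .. 5 (high-priority, high-risk work)
    research_maturity: int   # 1 (no research skill) .. 5 (embedded researchers)

def access_tier(team: Team) -> str:
    """Higher risk and lower in-team research skill earn more access and support."""
    score = team.business_risk * 2 - team.research_maturity
    if score >= 7:
        return "all-access"        # full support, priority recruitment, researcher pairing
    if score >= 4:
        return "select-viewing"    # templated studies, reviewed by ResearchOps
    return "general-admission"     # self-serve tools, observe sessions, read the library

teams = [
    Team("payments-redesign", business_risk=5, research_maturity=1),
    Team("internal-tooling", business_risk=2, research_maturity=2),
]
for t in teams:
    print(t.name, "->", access_tier(t))
# prints:
# payments-redesign -> all-access
# internal-tooling -> general-admission
```

The point isn’t this particular formula; it’s that access decisions flow from an explicit, reviewable policy rather than from first-come, first-served demand.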
4. Regard ResearchOps as Strategic Partners, Not Administrators
If you approach building research as building a research capability (not just a research team), you’ll likely build a vastly more diverse team than you might have done in the past. As a research services manager, your team will likely include highly specialised researchers—the type who can do the high-stakes, strategic research that no one else in the organisation can do—as well as systems designers, librarians (AI means that librarians are more crucial than ever), communications and data specialists, and, yes, administrators to keep everything moving.
To design, build, and maintain these systems, you’ll need a senior ResearchOps specialist, even if only as a consultant (use The Universal ResearchOps Career Ladder as a guide), to partner with you and help design the operating systems that will make your research strategy an operable reality.
A small, well-designed ResearchOps team can enable hundreds of people to learn about customers while ensuring that insights flow to where they’re most valuable. That’s the value proposition that keeps many ResearchOps functions intact even when research teams are cut. But this only works if ResearchOps is working in the service of a deliberate research strategy, not simply in reaction to distributed, egalitarian demands.
Nostalgia Is Not a Strategy
At the 2026 World Economic Forum’s Davos event, Canadian Prime Minister Mark Carney said in his seminal speech about the changing world order in politics and finance that “nostalgia is not a strategy.” His words rang true on so many levels, and they ring true in the corporate world, too.
The companies hiring ResearchOps without researchers aren’t making a mistake about the value of operations. They’re making a calculated bet that they can get customer insights more cheaply and at greater scale without a dedicated research team. In some cases, they might be right. But they’re also making a bet that they can democratize research quality, maintain research rigour, and build insights traffic systems without the craft expertise, strategic thinking, and quality control that skilled researchers bring.
They can’t.
The real question is whether research leaders will recognize this moment for what it is: not a crisis to weather, but an opportunity to redesign how research operates from the ground up. To build research capabilities, not just teams of researchers.
The old model of distributed growth and horizontal value is gone. If you’ve been given headcount and you think you’re reorganizing things back to what they used to be, I encourage you to rethink your position. The current hire-and-fire habits show that companies value operational capability and small teams that deliver outsized value. Research leaders need to design operating models that deliver vertical value from day one, and partner with ResearchOps professionals to do so. ResearchOps professionals need research leaders who understand this shift so they can build sustainable systems that scale the value of research—they must also leave logistics management behind and become research systems designers. When both roles recognize their interdependence and operate accordingly, as a partnership, that’s when research (not the team, but the capability that enables curiosity and knowing) becomes indispensable. That’s a future worth building.
Sponsor and Credits
The ResearchOps Review is made possible thanks to Rally UXR—scale research operations with Rally’s robust user research CRM, automated recruitment, and deep integrations into your existing research tech stack. Join the future of Research Operations. Your peers are already there.
Edited by Kate Towsey and Katel LeDu.
[1] That is to say, SaaS and consumer tech are no longer sexy, but “new tech,” or anything to do with AI, is.
[2] Banking, financial services, and insurance (BFSI) is an umbrella term for a broad range of institutions that provide financial products and services.
[3] Interestingly, some researchers have reported that their jobs are becoming more generalised.