Statistician 2.0 --- Statistics and Statisticians in the AI/ML era
Interview with Xiao-Li MENG (Harvard) by Xun Chen (AbbVie)
Highlights:
• Learn how statisticians can leverage their rigorous training and critical thinking to carve out a distinctive edge in interdisciplinary teams and high-impact projects.
• Explore the deeper value of advanced statistical education—what skills truly matter, and how students can future-proof their careers by focusing on the right capabilities.
• Gain insights into how statisticians can proactively drive scientific innovation — and what the rise of AI means for traditional academic paths and tenure-track expectations.
Note from editor: Xiao-Li Meng is the Founding Editor-in-Chief of Harvard Data Science Review, faculty co-director of LabXchange, and the Whipple V. N. Jones Professor of Statistics. He is renowned for his extensive research, innovative teaching methods, visionary administration, and engaging speaking. Meng was recognized as the best statistician under 40 by the Committee of Presidents of Statistical Societies (COPSS) in 2001 and has received numerous awards for his more than 150 publications across various theoretical, methodological, pedagogical, and professional development areas.
In 2020, Xiao-Li Meng was elected to the American Academy of Arts and Sciences. He has delivered over 400 research presentations and public speeches. His writing, including the popular column “The XL-Files” in the Institute of Mathematical Statistics (IMS) Bulletin, is celebrated for its clarity, wit, and thoughtfulness.
Xiao-Li Meng's interests encompass the theoretical foundations of statistical inference, including the interplay among Bayesian, Fiducial, and frequentist perspectives, and frameworks for multi-source inference. He is also focused on statistical methods and computation, such as posterior predictive p-values, the EM algorithm, Markov chain Monte Carlo, and bridge and path sampling. Additionally, Meng applies complex statistical modeling across various fields, including astronomy, mental health services, and genetic studies, among others.
Xiao-Li Meng earned his B.Sc. in mathematics from Fudan University (1982) and his Ph.D. in statistics from Harvard (1990). He began his academic career at the University of Chicago (1991–2001) before returning to Harvard, where he served as Chair of the Department of Statistics (2004–2012) and later as Dean of the Graduate School of Arts and Sciences (2012–2017).
Xiao-Li Meng is widely recognized for his deep and wide-ranging contributions to statistics and data science. He has helped shape the field through both scholarship and leadership.
Xun Chen is Vice President and Global Head of Data and Statistical Sciences at AbbVie. In her current role at AbbVie, Xun Chen leads the statistical strategy and execution across all clinical development programs, supporting a diverse portfolio of successful therapies in oncology, immunology, rare diseases, diabetes, and cardiovascular disease.
Xun Chen, who received her PhD in Biostatistics from Columbia University, is a passionate advocate for statistical leadership in drug development. She led the successful buildout of a comprehensive clinical sciences and operations platform in China (2010–2015) and is widely recognized as an industry thought leader through her contributions to major biostatistics consortia. Xun Chen served as President of the International Chinese Statistical Association (ICSA) in 2024. Her research spans key areas including multiplicity adjustment, missing data, adaptive design, multiregional trials, and Bayesian methods.
Building on her commitment to advancing the field, Dr. Chen recently sat down with Prof. Xiao-Li Meng for an in-depth conversation on the evolving role of statisticians in the pharmaceutical and biotech industries. In a time of rapid scientific and technological change, she emphasized the importance of fostering new mindsets and a data-driven culture to develop future leaders. We’re grateful to share this insightful interview with Biopharmaceutical Report readers and invite you to explore the ideas it brings to light.
Xun CHEN: Thank you, Xiao-Li, for joining me today to discuss the future of Statistics and Statisticians in the era of data and digital transformation.
The pharmaceutical industry is undergoing a digital transformation driven by emerging technology, data proliferation, and artificial intelligence (AI). The role of advanced data science capability has significantly expanded within the biopharmaceutical industry. This shift brings forth unprecedented opportunities to improve insights and data-driven decisions. However, statisticians in the pharmaceutical industry, once regarded as the 'stewards of sound thinking for good decision-making,' are now often perceived as 'obsolete' in the public eye in the new data era. There have been increasing calls in recent years for statisticians in the pharmaceutical industry to evolve.
This imperative has also been recognized within academia. As highlighted in last year's fireside chat, a central theme among participating professors was the evolution of statistical training to effectively support and engage with diverse fields of practice.
With the growing call for 'Statistician 2.0’ in the AI/ML era, what’s your take on it?
Xiao-Li MENG: Thank you, Xun! The fireside chat on AI that you mentioned will appear in the upcoming April issue of HDSR (https://hdsr.mitpress.mit.edu/pub/a7kmqk35/release/1?readingCollection=da931fd2). Interestingly, there's also another article, written independently by a separate group of statisticians, expressing very similar concerns. Both pieces are from academic perspectives, as you noted, and I can certainly relate to your observations about the pharmaceutical industry.
One thing that is probably clear to all of us is that few of us worry that statistics is going to become obsolete. Much of what practitioners in machine learning do is grounded in statistical thinking. They use statistics either in ways we don't commonly use, or sometimes without realizing they're applying well-established statistical methods. Take A/B testing, for example. It's widely used, but as statisticians, we've developed far more sophisticated approaches, like factorial designs. The real concern, which I completely understand, is what the future holds for statisticians.
If we stay within our traditional role, which is typically analyzing data using standard statistical modeling techniques, we certainly have a very strong competitor in this day and age. In fact, at large scale, large language models (LLMs) clearly have a far greater impact, whether we like it or not. The rise of AI has shown us something important, and I'll admit to anyone that we statisticians probably would never have come up with the idea of LLMs. And even if we had, we probably would never have been able to implement or popularize it on the same scale as computer scientists can. Therefore, we definitely need to reflect on the limitations of our field and consider how we might evolve.
At the same time, I also believe that every field has its own boundaries. That's why I often emphasize that science is not a single, unified discipline. For example, you can be a top physicist, but that doesn't mean you can solve complex problems in biology - you still need a biologist. Even though both are scientists, their expertise is domain specific. Similarly, as statisticians, we shouldn't claim that everything falls under statistics, because that's clearly not true. And if we think that way, it's not going to be effective. The truth is, we're not trained to do everything others do, and frankly, some of us may not even enjoy it. For many statisticians, the idea of mindlessly searching for patterns without understanding them can feel beneath their training. But there are others who have no problem with that approach and embrace an engineer-like mentality. Engineers often operate with the belief that "I can make it work, even if I don't fully understand why right now." They iterate, try things, and build solutions that may not be optimal, but they get things done and create something tangible that others can see and use.
As statisticians, we tend to start from fundamentals: we like to understand the 'why' behind things. Even when we produce a result or a product, we want to evaluate it rigorously and understand what's working, what's not, and why. That mindset is incredibly valuable. At the same time, when it comes to our role in data science, I believe statisticians should be at the core, but not necessarily the sole leaders. Instead, we should view ourselves as co-leaders. It's like a center with two directors – one is a statistician, the other is a computer scientist. Each brings a complementary perspective, and together they provide joint leadership.
I've worked with a variety of people, including scientists and social scientists. Often, they come to me and say, "Xiao-Li, I don't need you to teach me the basics of statistics. I can handle that myself, and my students can too. What I really need from you is to help me understand when not to use certain methods. What are their limitations, and when can they be dangerous?" That kind of question is usually the one that takes the most statistical insight.
So, one important role we can certainly play is by reviewing what's already been done, which is what I'm currently exploring with large language models. I'm trying to identify areas where people struggle, and as statisticians, we can step in to offer solutions. We don't always need to invent new methods. Sometimes, it's about applying what we already know. For example, Bayesian thinking and uncertainty quantification are core to our training – and they are certainly not new - but they may not be as familiar to those focused purely on algorithms.
I've seen people try to use a kind of pseudo-Bayesian approach. They know they need to combine prior information with data, but the way they do it, by simple averaging for instance, can be very problematic. As statisticians, we would look at that and say, "Wait, that's not the right way to do it." There's a whole framework, Bayes' theorem, that they might not be using properly. We know how to propagate and combine information in a more robust way.
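To make the contrast concrete, here is a minimal sketch of my own (a toy normal-mean example with made-up numbers, not anything worked through in the conversation): a naive "average the prior with the data" shortcut ignores how much information each source carries, whereas the conjugate Bayesian update weights the sources by their precision.

```python
# Minimal illustrative sketch (not from the interview): combining a prior with
# data for a normal mean with known variances. All numbers are made up.
import numpy as np

prior_mean, prior_var = 0.0, 4.0            # prior belief about the mean
data = np.array([2.1, 1.8, 2.4, 2.0, 2.3])  # observed data
sigma2 = 1.0                                # known observation variance
n, ybar = len(data), data.mean()

# Naive shortcut: treat the prior mean like one more estimate and average.
naive = 0.5 * (prior_mean + ybar)

# Bayes' theorem (conjugate normal-normal update): weight each source by its
# precision, so the data dominate as n grows and the prior fades appropriately.
post_prec = 1.0 / prior_var + n / sigma2
post_mean = (prior_mean / prior_var + n * ybar / sigma2) / post_prec
post_sd = post_prec ** -0.5

print(f"naive average:  {naive:.3f}")
print(f"posterior mean: {post_mean:.3f} (sd {post_sd:.3f})")
```

The point is not the particular numbers but the principle: proper combination weights each information source by how much it actually tells us and propagates the remaining uncertainty, which a simple average cannot do.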
So, I think statisticians can really help others save time by guiding them through these challenges, helping them avoid pitfalls, and applying proven methods to make their work more effective.
I believe there's one major area, one big direction, where we can now play an increasingly important role, and where people are more willing to listen to us. When we look at the current state of general AI and large language models, much of it is still driven by brute force. They are trained on massive datasets with an enormous number of parameters, relying on extensive computing power and significant human labor. It's essentially a proof of concept that this kind of massive training and fitting approach can work.
But now, there's growing recognition that this brute-force method isn't sustainable. It consumes immense amounts of energy and resources. As a result, people are starting to ask: what is a better, more efficient way to do this?
That's where statistical thinking, especially Bayesian thinking, becomes essential. It's like the difference between doing targeted probabilistic calculations and running endless simulations. If you had infinite resources, you could simulate everything and hope to find the right answer. But in practice, that's inefficient. Instead, we can use theoretical calculation and probabilistic reasoning to narrow down the space, focus on what's most likely, and avoid wasting time and energy on the improbable.
I think one good example we statisticians should reflect on is the DeepSeek model. Remember how shocked the market was - how could it perform so well with seemingly so little? To me, that wasn't surprising. The success wasn't necessarily about doing more with less - it was about doing better with thought. Prior approaches relied heavily on brute force: massive datasets, huge parameter spaces, and enormous computational resources. That kind of race tends to incentivize massive experimentation rather than deep contemplation.
When you have enough resources, you tend to rely on brute-force methods—running all kinds of powerful simulations. But then someone steps in and says, "Wait, we can do this more efficiently." And suddenly, you achieve substantial gains, not by scaling up, but by thinking differently. Now, what we're seeing is a shift. With the global race among companies and nations, people who understand models more deeply, who can reason about structure, penalization, trade-offs, and so on, are becoming increasingly valuable.
And this is where theoretical thinking matters. Not necessarily mathematical in the formal sense, but conceptual. As statisticians, we understand ideas like the bias-variance tradeoff. We know you can’t minimize both simultaneously, so we don’t waste time chasing the impossible. But someone without that training might spend ages experimenting, only to arrive at that realization the hard way. We can help shorten that learning curve.
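For readers who want the tradeoff stated precisely, the textbook decomposition of mean squared error is shown below (a standard identity, added here for reference rather than something derived in the conversation):

```latex
% Bias-variance decomposition of mean squared error for an estimator \hat\theta of \theta
\mathrm{MSE}(\hat\theta)
  = \mathbb{E}\!\left[(\hat\theta - \theta)^2\right]
  = \underbrace{\bigl(\mathbb{E}[\hat\theta] - \theta\bigr)^{2}}_{\text{bias}^2}
  \;+\;
  \underbrace{\mathrm{Var}(\hat\theta)}_{\text{variance}}
```

Driving one term toward zero typically inflates the other: an unbiased estimator may be highly variable, while a heavily regularized, low-variance estimator pays for it in bias.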
But to be effective, we need to speak their language, literally and conceptually. Otherwise, we'll be sharing valuable insights that no one can apply because they don't understand the framing. That's why I really appreciate seeing students today diving into machine learning. When they come back to classical statistics, they often realize: "Oh, this is just a formalization of what we've been doing intuitively." That connection is powerful.
I believe there's so much more we can contribute than we often realize. But to do so, we need to adapt. For example, I've been telling my department not to spend an entire semester teaching linear regression. There's so much more we could be teaching that would better prepare students for real-world challenges.
Xun CHEN: You've made a lot of great points. I have several questions I'd love to discuss with you further.
Your insights on the value of statistical thinking truly resonated with me. Could you elaborate on how statisticians can leverage such unique training and experience to distinguish themselves at work?
Xiao-Li MENG: Let me give you a very concrete example which I may talk about during my visit to Maryland in September. There’s a major area in machine learning known as 'divide and conquer' or, more generally, distributed learning. The idea is straightforward - when you have too much data to process at once, you break it into smaller chunks, analyze each part separately, and then combine the results.
Now, here's where the difference between deep statistical thinking and treating something as just an algorithm becomes evident. Many practitioners simply average the results from the different subsets. But a statistician, trained in concepts like likelihood and sufficiency, would immediately recognize the potential pitfalls of that approach. Averaging estimators can lead to a terribly biased result. This has been seen in distributed regression: run the regressions separately on each chunk, average the estimates, and you can end up with a highly biased estimator.
A statistician would say: "Wait, you're combining the wrong things." Instead of averaging the estimators, you should be combining the sufficient statistics, like the cross-product terms in regression (i.e., the numerator and the denominator of the slope estimator). If you aggregate those, and then compute the estimator, you get the same result as if you had fit the full model on the entire dataset. The same answer, obtained much more efficiently and in a statistically sound way.
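Here is a minimal sketch of that idea (my own illustration with simulated data and made-up names, not code from the interview): averaging per-chunk slope estimates generally differs from the full-data fit, while aggregating the sufficient statistics across chunks reproduces it exactly.

```python
# Minimal illustrative sketch (not from the interview): distributed simple
# linear regression. Combining sufficient statistics recovers the full-data
# estimator exactly; averaging per-chunk slopes generally does not.
import numpy as np

rng = np.random.default_rng(0)

# Simulate chunks whose x-distributions differ, so chunks carry unequal information.
chunks = []
for k in range(4):
    x = rng.normal(loc=2.0 * k, scale=1.0 + k, size=250)
    y = 1.5 * x + rng.normal(scale=2.0, size=250)
    chunks.append((x, y))

def slope(x, y):
    """Ordinary least-squares slope of y on x (with intercept)."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / (xc @ xc)

# Naive approach: equal-weight average of per-chunk slopes. This ignores how
# informative each chunk is and generally differs from the full-data answer.
avg_of_slopes = np.mean([slope(x, y) for x, y in chunks])

# Sufficiency-based approach: aggregate (n, sum x, sum y, sum xy, sum x^2),
# then form the estimator from the pooled statistics.
N = Sx = Sy = Sxy = Sxx = 0.0
for x, y in chunks:
    N += len(x); Sx += x.sum(); Sy += y.sum()
    Sxy += (x * y).sum(); Sxx += (x * x).sum()
pooled_slope = (Sxy - Sx * Sy / N) / (Sxx - Sx ** 2 / N)

# Reference: fit on the concatenated data; matches pooled_slope to machine precision.
x_all = np.concatenate([x for x, _ in chunks])
y_all = np.concatenate([y for _, y in chunks])
print(avg_of_slopes, pooled_slope, slope(x_all, y_all))
```

The same principle extends to multiple regression, where the pooled cross-product matrices play the role of the numerator and denominator terms.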
This is the power of statistical thinking. I’ve seen machine learning researchers go to great lengths to prove theoretically how to combine estimators, when in reality, the principle of sufficiency, something every statistician learns early on, already provides the answer. The concept of sufficiency may be a little foreign to some in the machine learning community, but it’s not beyond their reach. They can learn it if we teach them. The issue is, we haven’t been teaching it in a way that connects with the way they work or think.
Statistics has always been about extracting as much insight as possible from limited data. Historically, we didn't have the luxury of big data. That constraint forced us to think deeply and develop powerful, efficient methods. This is actually our strength. Imagine if computer science had been developed long before statistics - everything might have been brute-force computation, with little incentive to think critically about information and efficiency.
Now, ironically, even as we deal with massive datasets, the need for careful, efficient thinking is resurfacing. Companies are realizing how costly brute-force approaches are after investing heavily in building their data centers. Now, tools like DeepSeek are showing real promise, revealing just how much more we can achieve. As we face deeper and more complex problems, we’re starting to lose clarity and even information. That’s where statisticians can and should step in, because we know how to extract meaningful insights, even from limited or very noisy data.
But here’s the challenge: when results are driven by brute-force methods, and shiny products are produced quickly, people assume that’s where the value lies. They don’t always see how inefficient or wasteful the process was. As the cost of data processing becomes more visible, people are beginning to ask, 'Can we do better?' That’s our opportunity. We need to show that we have tools and thinking that can lead to more efficient and interpretable solutions.
However, it’s not just about claiming territory. If we come in simply to say, 'This is our territory,' it will backfire. We need to collaborate in a way that adds value. That’s the hard part. People naturally ask, 'Why do we need statisticians? They don’t build products.' But the truth is, we can make those products better, smarter, and more efficient. We just need to approach it with humility, clarity, and a spirit of partnership.
Xun Chen: The power of statistical thinking! That's truly fascinating, Xiao-Li. In practice, we know, however, it's not uncommon for statisticians with advanced degrees — those who excel in exams and complex problem-solving — to struggle with grasping the broader context and deeper implications of statistical thinking. I used to be one of them. It took me years at work to develop the capacity for deeper, intuitive statistical thinking.
What do you believe to be the true value of additional years of advanced statistical training? Specifically, what knowledge and skills should students pursuing a Ph.D. in statistics consciously develop and enhance?
Xiao-Li MENG: You’ve pinpointed something very important, and I’d like to respond just as concretely. To me, the key difference between a master’s degree and a PhD is this: at the master’s level, you acquire practical skills and learn how to do things; with a PhD, of course you also learn how to do things, but more importantly, you learn why we do them, and when we shouldn’t.
If you think about it in terms of business value from a startup's perspective - a master's-level statistician can help you build a product and get something off the ground. A PhD, assuming they also have practical skills (and that's important - there's a common criticism that some PhDs focus too much on theory and not enough on application), can help make that product optimal and competitive.
Anyone can create something these days, whether it’s using ChatGPT or building an app. But what makes one solution better than another? That’s where deeper thinking and analytical rigor come in. That’s the value a PhD can bring to elevate something from functional to exceptional.
And when I talk about being competitive, I mean more than just technical excellence. This is why I believe we need to think about data science very broadly. It’s not just statistics or computer science. It also includes understanding people, communication, marketing and operations. Building something is just the start - developing it, deploying it, and making it impactful require a broader set of skills.
So if I had to put it in concrete terms: a master's gets you started, and a PhD helps you optimize.
Lately, I’ve been reflecting on the broader landscape of General AI. Computer scientists have done an impressive job initiating the field, including demonstrating the possibilities, inspiring innovation, and getting society genuinely excited. As we move toward the next level of development, I believe we, as statisticians, should be co-pilots in this journey.
When you look closely at the deep thinking happening in computer science and machine learning, you'll find that much of it is grounded in statistical and probabilistic reasoning. These researchers may not always call it statistics, but they're using many of the core ideas we've developed, applying them through their own lens. They have a key advantage: by starting with implementation, they quickly realized the need for optimization and deeper theoretical grounding. In doing so, they've become eager students of what we already know.
In contrast, statisticians often begin from a different place. We focus on understanding how to do things before we actually build them. While this gives us depth, it can put us at a disadvantage when it comes to implementation, especially in areas like managing large-scale databases or deploying models at scale. Many of us, even or especially with strong theoretical training, lack hands-on experience in handling massive datasets or infrastructure-level work. That's where collaboration becomes essential.
We need stronger communication and partnerships with computer scientists. Realistically, when top-level AI researchers need help, they’re unlikely to turn to entry-level statisticians or master’s graduates for basic tasks, because those are skills computer scientists often possess themselves and may even execute more efficiently. But when they encounter deep statistical challenges - questions that require critical thinking, modeling expertise, and theoretical insight - that’s where PhD-level statisticians can and should step in, at exactly the level where they add the most value.
Xun CHEN: You are spot on again, Xiao-Li. In today’s rapidly evolving landscape, merely knowing how to apply statistical methods is no longer sufficient. With the proliferation of alternative digital tools and quantitative methodologies, and the continual emergence of new ones, it’s essential to move beyond traditional practices. Adhering to statistical methods solely out of tradition or regulatory mandates will not succeed. Statisticians in academia, industry, and regulatory bodies should collaborate to proactively advocate for the core value of statistical thinking and embrace new data sources and methodologies, ensuring that statistical insights remain integral and complementary within the broader data science ecosystem.
I remember a paper you featured early on in HDSR, comparing predictive models and inferential models (https://hdsr.mitpress.mit.edu/pub/a7gxkn0a/release/7). That duality is key. We need to help the broader community understand that it’s not either/or. On the one hand, we must embrace the usefulness of black-box models when they perform well. On the other hand, we need to stay vigilant about the risks they pose and develop strategies to mitigate those risks.
So rather than waiting for something to go wrong and then fixing it, how can we more proactively navigate the advancement of science and technology?
Xiao-Li MENG: Yeah, that’s a great question. I think there’s an easy answer and a hard one.
The easy answer is humans are actually very good at using black boxes. We do it all the time. I use my computer every day without really understanding how all the hardware works. Most people drive cars without knowing exactly how the engine functions, and that's fine, because we know enough not to do anything reckless. We don't pour water on a laptop. We don't put gasoline in the wrong part of the car. So, at a broad level, black boxes themselves aren't the issue. People often feel threatened by them, which I understand (I have my own concerns), but we shouldn't fear them just because we don't understand every part.
What we should be cautious about is the scale and speed at which these black-box systems can operate, especially things like general AI. In daily life, we learn through trial and error. You misuse an appliance, it might cost you money or cause a minor injury, but you learn from the experience. However, with powerful AI systems, we often don’t get a second chance. Mistakes can happen instantly, at massive scale, and with consequences we can't reverse. That’s the real risk.
So how do we address that? I think we need to take a cue from the lab sciences. Anyone who's worked in a chemistry or biology lab knows that one must follow strict safety protocols. Most of the time, those measures might seem excessive, but they exist to prevent rare, potentially catastrophic events. Over time, this becomes part of the lab culture. We need a similar cultural shift in how we handle large-scale, high-impact technology. That's where statisticians have a critical role to play in ensuring due diligence. We should be embedded in the process as quality control experts, not just after the fact, but from the beginning. I was once invited by the U.S. Census Bureau to serve as a quality control expert. At first, I thought, "I've never done anything like that." But then I realized they were right about the role I could play: they have economists building the models, but they need a statistician to evaluate whether what they're doing is legitimate.
In fact, as we build powerful systems, we should also build defense systems in parallel. It’s like developing missile technology. If you build offensive capabilities, you must also develop anti-missile defense systems. Otherwise, you're vulnerable. That same logic applies here. Alongside building black-box tools, we need to build counter-tools, mechanisms to detect, audit, interpret, and safeguard.
Statisticians are uniquely positioned for this. We bring more insight than simply relying on brute-force trial and error. We are the professionals entrusted with reasoning quantitatively about variation. Variability is not just noise; it's where information lives. Unfortunately, we're often viewed only as the people who talk about uncertainty, which gives us an image problem. People think of us as the ones who raise doubts and create complications.
But in reality, we are information experts. We understand signal and noise. We think about everything together, including how data behave, how to extract meaning, and how to build robust systems. Sadly, much of the credit for 'signal processing' has gone to engineers. The 'product building' is credited to computer scientists. And statisticians are seen as the ones who slow things down by worrying about uncertainty. That's a false narrative.
Our role should be present in all those areas - signal, noise, and everything in between - they're fundamentally part of our domain. So, I believe one of our key responsibilities is not only to help build the product, but also to build the counter-product alongside it.
This also brings us back to the issue of training. It may not be realistic to expect a single individual to master everything. That's why I've always been cautious about the idea of defining data science as a single, standalone discipline and building a department of data science; as I wrote in the inaugural editorial for HDSR, I don't think that model reflects the complexity of the field. Even within statistics, expecting a PhD student to be trained to do everything, from deep theory to full-stack implementation, isn't always feasible.
What this really points to is the need for building strong, interdisciplinary teams. A company, for example, should hire a mix of people: PhDs in statistics, master’s-level statisticians, computer scientists, and others with complementary skills. But don’t place them into separate teams. Instead, put them on the same team. Let them work together, build a language, and develop mutual understanding. That’s how we learn from one another.
To me, that’s what data science is all about - not everyone doing everything, but people with deep expertise in one area who also have working knowledge across others, all brought together by a shared focus on solving real problems by learning from data.
If a company wants to grow data science capability, I'd actually recommend not starting by hiring people just because they're labeled 'Data Scientists.' Often, they may not have the breadth or depth you expect. Instead, hire people with clearly defined, strong skill sets in specific areas - statistics, computer science, domain knowledge - and form a unified team around real problems. Let them build and grow together.
Whether or not you call them 'Data Scientists' doesn’t matter. What you’ll have is a true data science team and that’s far more powerful.
Xun CHEN: Yes, that’s a great point. I’ve been thinking we might benefit from building a more hybrid talent pool. It could be valuable to bring together a mix of backgrounds, PhDs, master’s-level professionals, and people with training in statistics, data science, and related fields. That diversity could really strengthen the team.
Xiao-Li MENG: Right, and really building a true data science team.
Xun CHEN: Exactly. Now that you've mentioned co-leadership, I'm curious about how this works in academia. In industry, for statisticians to stand out on a cross-disciplinary team, communication skills, the ability to influence, and the capacity to collaborate effectively are just as important as technical skills. Does this shift in thinking imply something different for those pursuing academic careers, or are they still bound by the traditional tenure track expectations, where publishing papers is the primary focus?
Xiao-Li MENG: Right! You've touched on something really crucial and genuinely difficult. This issue has a long history. In academia, especially in the mathematical sciences, which includes people like me, we’ve been trained, valued, and rewarded based on our individual contributions. We’re not typically trained or incentivized to think in terms of contributions to a team. That’s a deep, systemic challenge, because the reward structures haven’t evolved to support collaborative work.
One of the biggest challenges in promoting people is evaluating their contributions in massive, team-based projects. In our traditional model, especially in fields like mathematics, papers are often single-authored or have just a few co-authors. We're not used to seeing names on papers with hundreds of contributors, like in physics, where some publications list a thousand authors. So how do you assess individual value in that context? It really calls for a fundamental shift in academic culture.
But I do think that shift is already happening, especially when I look at my own students. Fifteen years ago, almost all my students would've followed a path similar to mine to become professors. They weren't thinking about industry. But today, the majority go into industry. That tells me something important: students are signaling that the landscape is changing.
When they go into industry, they're not expecting recognition in the form of academic fame. They're not thinking, "This is going to be Xiao-Li's paper" or "This product will have my name on it." Instead, the reward systems are different. Of course, compensation is a factor, but so is the opportunity to work on complex, high-impact problems. The mindset is entirely different.
I was just talking with the President of a French university this morning. He was visiting us to discuss AI. I told him, “You’re in a position to make real change.” Society now sees how much value and power the Tech industry can generate. Traditionally, major scientific and technological advances started in universities and were later translated into industry. But that’s no longer the case. Deep learning, for instance, has largely emerged from industry, because academia simply can’t compete on that scale. We don’t have the data, the computational resources, or even the manpower.
So what we need now is a new kind of entity - a hybrid model that brings together the strengths of both academia and industry. Industry brings speed, scale, and resources. Academia brings rigor, deep thinking, and a vast knowledge base. There’s so much potential in that kind of partnership. Maybe it’s a think tank, maybe it’s a new research institute, but it has to be something new, built for this era, where both sides contribute as equal partners.
And this connects directly to how we train the next generation. In the end, the concept of a traditional degree will probably continue to exist, but I wouldn't be surprised if we eventually see the emergence of entirely new kinds of degrees. Right now, we have academic degrees like PhDs, as well as a range of professional degrees. But perhaps there should be a new kind of recognition, something that signals not just depth in a field, but a broader, integrative knowledge across disciplines.
We’ve been talking for years about interdisciplinary and multidisciplinary training. Some now use the term transdisciplinary. But I think we’re heading toward something even more transformative, not just combining disciplines, but organizing around problems rather than fields.
Take climate change, for example. It's a massive, complex issue that spans science, technology, policy, economics, ethics, etc. And it’s becoming increasingly political. You could imagine building an entire educational and research structure focused on that one grand challenge. Students, faculty, and professionals wouldn’t be organized by department or discipline, but by the shared goal of solving that specific problem. It would be more than a think tank. It would be an action tank, with structure, collaboration, and implementation all built in.
The way we currently structure knowledge, whether in industry or academia, reflects an old model of division of labor, which made sense historically. But today, the increasing need for integration suggests that model no longer serves us well. We may be headed toward a reorganization that goes beyond bringing disciplines together to create new disciplines: a model where disciplines dissolve into new ways of thinking and doing.
I don’t know exactly what form this will take, but I believe it’s already happening organically. What’s emerging may be more fundamental than just merging fields - it’s about reshaping how we define knowledge, contribution, and collaboration. That’s the big picture I’m currently seeing.
Xun CHEN: That's really great, Xiao-Li. I'll summarize the key points you shared today, and let's see how the discussion evolves in the next round.
Xiao-Li MENG: Absolutely, I'd love to work with you on this. Once you have a summary, please send it to me. I'd love to build on these notes and develop the ideas further. There's a lot to learn here. What I aim to do is bring in different voices. That way, we're not just sharing ideas; we're also gathering reactions and building momentum.
Xun CHEN: Fantastic! Thank you, Xiao-Li!


