January 20, 2026

Where Ideas Meet Action: How LSI Shapes Purposeful Leadership

LSI Fellows Catherine McCarthy and Michael Walsh are now actively collaborating to help combat health misinformation worldwide.

There is a special alchemy in the relationships formed among LSI Fellows. Beyond sparking ideas and creating close friendships, the LSI experience ignites action. Some LSI alums are already proving the power of Fellow partnerships by working together to create viable, real-world solutions to some of society’s most pressing problems. 

Catherine McCarthy, former CEO of the non-profit Medical Aid Films and former Commissioning Executive at the BBC, started her career as an educator before becoming a leader in using media and communications to create lifesaving digital health solutions, working in some of the world’s poorest countries. Michael Walsh, former Managing Partner at Kilkenny Capital Management, initially trained as a scientist before making a career in finance, helping fund biotechnology, pharmaceutical, and medical device firms.

Both came to LSI in the autumn of 2024 with somewhat intersecting ideas of how they wanted to use their second chapters in life to serve society. 

In her application essay, Catherine wrote about the importance of access to health information worldwide. This had been a theme of her work for many decades and she knew she wanted to continue that mission, but in a new way. 

In his essay, Michael noted the estimate that approximately 300,000 Americans died as a likely result of COVID misinformation, identifying this as just the kind of public health issue his background suited him to contribute to. “My career was at the intersection of business and science,” he notes. “Being an investor in drugs and medical devices forces you to understand how every disease works and all the treatments for them. It’s almost like going to medical school without learning how to diagnose and treat. I know the language.” He’d recognized that language was key, so he had at least some of the tools to foster better health outcomes through sharing what he knew.

Today, Catherine and Michael are working together to explore a number of solutions to health misinformation, including using cutting-edge AI-assisted chatbot technology to combat what they both identify as one of the most pernicious and deadly problems in our world: medical misinformation.

Magic happens in the creative dynamics of a small, select, and diverse community

Two aspects of the LSI fellowship that particularly attracted both Catherine and Michael (its size and selectivity) proved essential to creating the spark that made this collaboration possible. 

Being in a relatively small cohort of proven leaders offered Catherine a jolt of energy and inspiration. “I’ve always been someone who enjoys working with other people,” she says, adding that traveling from the UK to Chicago offered an intriguing challenge and something fresh. “I’m British and a Londoner. I had never been to Chicago. So everything was very different.” It turned out she had something important in common with one of the Chicago-based Fellows. It didn’t take long to make that connection with Michael.

Just a few weeks after arriving on campus, in October of 2024, their ideas met in a word cloud. “We were taking our first module of two courses on social entrepreneurship and the professor asked everybody just to give one or two words about the thing that they’re most interested in,” Michael explains. “I said medical misinformation and Catherine said disinformation; only two of us were in that part of the cloud.” A couple of weeks later, in another class, the two teamed up to create a model exploring how the mechanics of social ventures work. “I realized that she had knowledge and experience that I do not have and will never have. And vice versa.”  

The power of partnership and complementary expertise

A history professor and friend of Michael’s had given him a piece of advice he took seriously. “He said ‘be very careful getting yourself into something where you don’t have domain expertise. Because domain expertise takes five to 10 years to develop. And if you don’t have time, you’re 62 years old, if you’re going to spend seven years learning the domain before you can accomplish anything, you’ll be 70, you won’t want to do it anymore.’ As soon as I sat down with Catherine, I realized, oh, she has the domain expertise that I don’t.”  

Catherine is quick to point out which of Michael’s strengths complement hers. “Michael’s quite focused on developing the chatbot and the technical end, which is something I could never do. I’m not a techie.”  

“Each of us is necessary, but not sufficient,” says Michael, perfectly describing the humility that effective partnerships require. “Ever since our first conversation, every idea has been a joint one. I cannot point to anything that was my idea, or Catherine’s idea alone. It was just through the discussions that we came to what appear to be good conclusions.” 

As successful as they have been as individuals, it appears that their partnership is greater than the sum of its parts. “In the 14 months since Catherine and I started talking, not one person has been able to tell us why our approach to solving this problem is not the best one that they’ve heard,” Michael says confidently, adding “it doesn’t mean it’s going to be easy to implement.” 

In terms of implementation, Catherine describes their current state as one of research and development, with her focus now more or less on the former and his on the latter.

“Michael is focused on the development of this chatbot or videobot, which would link to a kind of front end, which is animated videos. I’m now focused on a piece of research that will explore how misinformation travels across the world. And it will give us the opportunity to test co-created content with community health workers and communities…the end game is to produce content that pre-bunks health misinformation and leads people to greater, deeper knowledge through a chatbot.”

Here Catherine refers to her work with one of the world’s leading experts in what’s called attitudinal inoculation, or pre-bunking. This is a field of study the pair are using to guide their work.

Ongoing conversations with intelligent humans reveal the potential of AI-driven dialogue

Both Catherine and Michael are now in continual conversations with each other as well as with leaders in medicine, technology, and social research. What they are finding is that AI, if trained correctly, may play a crucial role in pre-bunking, debunking, and preventing the spread of misinformation. 

“There was a study out of MIT and Cornell in September of 2024, that sat 2,000 avowed conspiracy theorists in front of a curated chatbot that would allow them to discuss whatever their pet conspiracy theory was. And in one out of five of them, they were able to get those people to understand that their conspiracy theory was wrong,” Michael explains. “In all the research on disinformation, nothing has ever come close to that. Generally, the more you try to convince them that their ideas are false, the more you consolidate those very ideas in their head.”

This is the kind of study that is helping Catherine and Michael to develop their chatbot. Their hope is that their bot could complement and support medical professionals. They are creating videos and interactive chatbots to assist the work of doctors who do not have the time to address the issue of misinformation. “One of my ideas coming into the LSI fellowship is that information needs to come from trusted medical messengers,” Michael explains. Through his own ongoing discussions with medical professionals, he has been able to home in on how this technology might be most effective.

“Something like 15% of all clinical time is taken up with doctors re-educating people about things that are out there that are just not true,” Michael explains. “This is a worldwide problem. So, we’re talking about changing medical education. Not of doctors, but of patients. We have to employ the entire medical establishment and get them to buy into the idea that something has to change.”

That buy-in will depend on answering the kind of in-depth questions both Michael and Catherine are asking, including the pivotal questions around the role of AI. 

“There’s a study out of the UK that showed that even though the LLMs can answer medical questions better than doctors or better than medical students can, when you put those same tools into non-expert hands, they get the wrong answer two out of every three times. That’s the most profound bit of information I could have ever seen. What it means is there’s nothing wrong with people going to ChatGPT…if they already speak the language.”

Here, Michael returns to his original thesis: this is a language problem. “We need to build a user interface that will help people turn their general medical questions into specific, answerable queries for a general chatbot, and then generate answers the user can understand at their level of medical knowledge. And that is a definable product. Without that insight, we had no idea what our product would actually look like. Now it’s pretty simple and it’s very doable.” 

Michael describes their proposed product as a three-legged stool. “First, it’s a generated video on the front end to bring people up to speed about a particular medical topic, then the middle piece is this chatbot that allows them to have a discussion about whatever they’re interested in, translated into medical-speak.” Lastly, the videobot (the combination of video and chatbot) will be distributed by trusted medical messengers, such as doctors and nurses. Then comes perhaps the most interesting and novel part: how an AI-powered chatbot might allow users to pose questions they might not feel comfortable asking a medical professional, including potential misinformation they may be confused about (or even dedicated to believing). If this bot were trained like a doctor, on the actual science, how might health outcomes improve?

Through the power of their partnership, these LSI Fellows are bringing us closer to answering this important question.

What questions might you want to help answer in your next chapter?

This piece was developed in partnership with ROAR Forward. Learn more about ROAR Forward here.

Category
Fellow Stories, Planning Your Next Chapter