How AI Is Disrupting Our Industry, and What We Can Do About It – Reggie Townsend

An Interview With Cynthia Corsetti

Artificial Intelligence is no longer the future; it is the present. It’s reshaping landscapes, altering industries, and transforming the way we live and work. With its rapid advancement, AI is causing disruption — for better or worse — in every field imaginable. While it promises efficiency and growth, it also brings challenges and uncertainties that professionals and businesses must navigate. What can one do to pivot if AI is disrupting their industry? As part of this series, we had the pleasure of interviewing Reggie Townsend.

Reggie Townsend is the VP of the SAS Data Ethics Practice (DEP). As the guiding hand for the company’s responsible innovation efforts, the DEP empowers employees and customers to deploy data-driven systems that promote human well-being, agency, and equity to meet new and existing regulations and policies. Townsend also serves on the National AI Advisory Committee (NAIAC) and several other boards promoting trustworthy and responsible AI, combining his passion and knowledge with SAS’ nearly five decades of AI and analytics expertise.

Thank you so much for joining us in this interview series. Before we dive into our discussion, our readers would love to “get to know you” a bit better. Can you share with us the backstory about what brought you to your specific career path?

My interest in data ethics and AI came from a mix of curiosity and concern. As a technologist, I’ve always been fascinated by the evolving impact of AI on various industries. Several years ago, I had a conversation with a colleague about responsible AI that prompted me to dive deeper, and that’s when I realized how using past data to shape future structures could perpetuate inequalities, particularly for disadvantaged communities.

This led me to immerse myself fully in the topic and advocate for ethical AI practices. Today, as Vice President of Data Ethics at SAS, I focus on building a more inclusive and equitable future through the responsible innovation and use of this technology.

What do you think makes your company stand out? Can you share a story?

SAS is a nearly 50-year-old company with a long history of innovation, and that continues today at the cutting edge of AI. We have award-winning technology built around trust and trustworthiness, which is what I focus on at SAS.

But I think our people-first approach and commitment to responsible innovation are what set us apart from the pack. Our CEO Jim Goodnight, along with the leadership team, did a great job of nurturing a positive culture from the start and fostering great talent. We’re fortunate to have a team of individuals who want to do good, and that culture often leads to further positive outcomes within SAS and beyond.

Our Data Ethics Practice is a strong example of that commitment to employee development and responsible innovation. What began as informal discussions among a small group of colleagues quickly evolved into an official initiative addressing critical business, industrial, and social issues stemming from AI development. The potential risks of AI, particularly in areas such as public safety, health care and finance, prompted ethical discussions and inquiries. This helped us formalize our efforts and develop a comprehensive business case. Thanks to the support of our executive team, we got the Data Ethics Practice up and running in a short time.

That is a good example of the SAS corporate ethos. We believe that trust is the currency of innovation, and our commitment to ethical practices reflects our dedication to building a better future for all.

You are a successful business leader. Which three character traits do you think were most instrumental to your success? Can you please share a story or example for each?

For me, I think of four instrumental traits as part of a framework I call “professional principles.”

The first is building trust by establishing genuine relationships. By demonstrating integrity and reliability, you can cultivate trust among colleagues and stakeholders that is reciprocated and drives team success.

The second principle is embracing vulnerability. Recognizing and acknowledging my own limitations while also being open to new ideas has fostered an environment of authenticity within my team and created space for creativity in our innovation.

The third principle is understanding situations. By actively listening and empathizing with those involved, I’ve personally gained valuable insights into complex issues and can approach them with a more well-rounded perspective.

This leads to the last principle, which is taking purposeful action. Empathetic understanding allows me to take purposeful actions that actually target the underlying needs and result in real change.

Let’s now move to the main point of our discussion about AI. Can you explain how AI is disrupting your industry? Is this disruption hurting or helping your bottom line?

In this industry, AI is causing significant disruption by democratizing access to the kind of tools we’ve traditionally offered as part of our business model at SAS. The rise of open-source platforms has intensified competitive pressures, prompting AI and analytics providers to rethink our strategies and partnerships in this space. While the accessibility of tools on platforms like GitHub is a boon for innovation, it also means we have to work harder on cohesion and centralization to ensure responsible use.

To navigate this landscape, we’re focused on fostering better partnerships and providing a measure of governance through our platform for data scientists. This approach not only keeps us at the forefront of the AI space but also ensures that the tools are deployed ethically and effectively.

AI’s emergence has introduced a new risk-reward dynamic that organizations must contend with. As automated decision-making becomes more prevalent, we must scrutinize the criteria behind these decisions to prevent past biases from being perpetuated. By addressing these challenges head-on, we can harness the potential of AI to drive positive impacts while mitigating potential drawbacks.

AI disruption presents challenges, but it also offers opportunities for innovation and growth. By adapting to these changes and prioritizing responsible AI practices at the onset, we can use AI to improve efficiency and fairness while creating real value for our customers.

Which specific AI technology has had the most significant impact on your industry?

Generative AI has had a significant impact, particularly due to the proliferation of deepfake content, which has brought attention to the challenge of maintaining trust in visual and auditory information.

However, generative AI is not just a question of output. Synthetic data generation, a form of generative AI, supplements incomplete data sets with new data derived from existing sources, filling gaps and addressing historical limitations. This capability has the potential to revolutionize fields such as drug development, particularly for rare diseases where data is limited, by facilitating quicker testing and validation of drugs before they reach the market. It’s for reasons like this that generative AI has far-reaching implications for every kind of industry.
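To make the idea concrete, here is a deliberately minimal sketch of synthetic data generation: fit simple statistics to a small “real” column, then sample new records from the fitted distribution. This is an illustrative toy only, not SAS’s method; production systems use far richer models (GANs, copulas, etc.), and the column name and numbers below are invented for the example.

```python
import random
import statistics

random.seed(0)  # reproducible toy example

# A small "real" data set: toy patient ages (illustrative, not real data).
real_ages = [34, 41, 29, 52, 47, 38, 44, 31]

# Fit a simple model: mean and standard deviation of the real column.
mu = statistics.mean(real_ages)      # 39.5
sigma = statistics.stdev(real_ages)  # ~8.0

# Sample 100 synthetic ages from a normal distribution fitted to the data,
# supplementing the original 8 records with new, statistically similar ones.
synthetic_ages = [round(random.gauss(mu, sigma)) for _ in range(100)]

print(f"real records: {len(real_ages)}, synthetic records: {len(synthetic_ages)}")
```

The synthetic column preserves the broad shape of the original (its mean hovers near 39.5) while adding volume, which is the core appeal in data-scarce settings such as rare-disease research.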

Can you share a pivotal moment when you recognized the profound impact AI would have on your sector?

Each time I encountered a new development or application of AI across an array of use cases, I could feel a sense of realization and urgency. It became increasingly clear that AI was not just another technological advancement but a paradigm shift that would fundamentally reshape our society.

I became more aware of the need for proactive and responsible approaches to AI adoption. It wasn’t just about embracing new technologies. It was about ensuring that these technologies were deployed ethically, equitably and with an understanding of their broader implications.

How are you preparing your workforce for the integration of AI, and what skills do you believe will be most valuable in an AI-enhanced future?

As AI continues to become integrated into the workforce, it’s essential to recognize that this transition represents a significant societal shift, similar to historical shifts in labor practices such as the Industrial Revolution. Just as the advent of computers and the internet required a reconfiguration of work processes, the rise of AI necessitates a similar adjustment.

In the short term, our focus is on establishing baseline AI literacy across the workforce. This entails ensuring that employees understand the fundamentals of AI technology, its applications and its implications for their respective roles. This can empower the workforce to effectively leverage AI tools and participate in the ongoing digital transformation.

What are the biggest challenges in upskilling your workforce for an AI-centric future?

The biggest challenge is the transition from a state of stability to a transformative phase. This journey involves navigating disruptions and identifying who profits along the way — and who might be left behind. AI literacy is a fundamental component of upskilling efforts, as employees need to grasp not only the technical aspects of AI but also its practical implications for their daily tasks and responsibilities.

Another challenge lies in addressing concerns about productivity and accuracy. As AI enables tasks to be completed more efficiently and accurately, employees fear losing their jobs. So, part of upskilling involves reassurance and education on how AI can enhance productivity and create new opportunities for growth and innovation.

What ethical considerations does AI introduce into your industry, and how are you tackling these concerns?

As we transition to increasingly automated processes, we need to build ethical considerations into every stage of AI development and deployment.

Before creating AI models, we must consider their purpose, potential outcomes, and potential to harm groups of people, even indirectly. Additionally, ongoing monitoring and calibration are essential to ensure accuracy, detect biases, and assess whether the AI is fulfilling its intended purpose securely.

Recent instances, such as copyright infringement issues arising from AI-generated content by large language models, highlight the importance of ethical inquiry. These ethical considerations extend beyond technological issues and into matters of social consensus as well. Building consensus on ethical standards is essential before the widespread adoption and sharing of AI technology.

What are your “Five Things You Need To Do, If AI Is Disrupting Your Industry”?

When AI disrupts your industry, you need to implement trustworthy AI practices at every stage — and it begins before the first line of code is written. Here are my five key steps:

  1. Establish Oversight and Governance: Set up robust oversight mechanisms early in development to ensure accountability and ethical governance.
  2. Develop Standard Operating Procedures: Create procedures to assess and monitor overall readiness and regulatory compliance for AI initiatives.
  3. Prioritize Compliance and Ethics: Ensure compliance with legal and ethical standards by implementing comprehensive frameworks and integrating ethical considerations into organizational culture.
  4. Invest in Technological Infrastructure: Secure adequate technological infrastructure, considering factors such as data security, scalability, and regulatory compliance.
  5. Implement Monitoring and Auditing: Establish mechanisms for monitoring and auditing AI systems to ensure compliance and identify issues. Regular audits help maintain reliability and integrity. Ideally, an AI platform will have trustworthy AI capabilities like model monitoring, explainability, transparency and bias detection built in.
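As a rough illustration of step 5, the sketch below shows two checks a monitoring and auditing layer might run: a prediction-drift flag and a simple group-fairness gap. The thresholds, function names, and data are all illustrative assumptions for this example, not a description of SAS software.

```python
# Illustrative monitoring checks (hypothetical names and thresholds).

def prediction_drift(baseline_rate: float, current_rate: float,
                     threshold: float = 0.10) -> bool:
    """Flag drift when the positive-prediction rate moves past a threshold."""
    return abs(current_rate - baseline_rate) > threshold

def demographic_parity_gap(outcomes_by_group: dict) -> float:
    """Largest difference in positive-outcome rate between any two groups."""
    rates = [sum(v) / len(v) for v in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Toy audit data: recorded outcomes (1 = approved) per demographic group.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approval rate
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approval rate
}

gap = demographic_parity_gap(outcomes)              # 0.375
drifted = prediction_drift(baseline_rate=0.60,
                           current_rate=0.56)       # False: within threshold
print(f"parity gap: {gap:.3f}, drift flagged: {drifted}")
```

Here a regular audit would surface the large approval-rate gap between groups even though overall prediction rates have not drifted, which is exactly the kind of issue built-in bias detection is meant to catch.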

What are the most common misconceptions about AI within your industry, and how do you address them?

One of the most common misconceptions is the belief that AI will inevitably lead to catastrophic outcomes, such as mass job loss or even existential threats like a “Terminator” scenario. To address this misconception, we focus on dispelling fear through education and awareness initiatives. By providing accurate information and practical examples of AI’s benefits, we try to shift perceptions from fear to understanding.

We also recognize the importance of building trust in AI technologies. Many individuals base their beliefs on the opinions of friends and peers, highlighting the significance of social influence in shaping perceptions of AI. Therefore, we prioritize transparent communication and engagement with stakeholders to build trust and confidence in AI initiatives.

Moreover, there’s a branding issue associated with the term “AI” itself. It can evoke images of autonomous robots or dystopian futures, which further contributes to misconceptions. To address this, we emphasize that AI is not a monolithic entity but rather a set of technologies that can be implemented to enhance various aspects of our lives and work.

Can you please give us your favorite “Life Lesson Quote”? Do you have a story about how that was relevant in your life?

One of my favorite life lesson quotes is by Goethe, who said, “I have come to the frightening conclusion that I am the decisive element. If we treat people as they are, we make them worse.” This quote resonates deeply with me because it emphasizes the notion that what and how we do things matter a great deal and have an impact. If we speak to the potential in others, we can help them become what they are capable of becoming.

Off-topic, but I’m curious. As someone steering the ship, what thoughts or concerns often keep you awake at night? How do those thoughts influence your daily decision-making process?

Responsible AI adoption and ethical practice are things I think about regularly. Balancing innovation with ethical considerations and staying ahead of technological advancements while mitigating risks — not just in this industry but for society as well — are key concerns I have. These factors influence my daily decision-making, driving me to prioritize transparency, collaboration, and responsible choices.

You are a person of great influence. If you could start a movement that would bring the most amount of good to the most amount of people, what would that be? You never know what your idea can trigger. 🙂

If I could start a movement to bring the most good to the most people, it would focus on two areas. First, providing housing for those who are unhoused. It is fundamentally inhumane for individuals to be forced to live on the streets, and addressing homelessness would significantly improve the lives of countless people. Second, I’d like to evangelize the concept of critical thinking. With so much misinformation out there, it’s essential for individuals to develop the ability to think critically about the impact of technology on society.

How can our readers further follow you online?

You can keep up with me on LinkedIn.

Thank you for the time you spent sharing these fantastic insights. We wish you only continued success in your great work!

About the Interviewer: Cynthia Corsetti is an esteemed executive coach with over two decades in corporate leadership and 11 years in executive coaching. Author of the upcoming book, “Dark Drivers,” she guides high-performing professionals and Fortune 500 firms to recognize and manage underlying influences affecting their leadership. Beyond individual coaching, Cynthia offers a 6-month executive transition program and partners with organizations to nurture the next wave of leadership excellence.