
Could your choice of AI provider become your next brand reputation risk?


The pace and extent to which AI has been embedded into our daily lives is difficult to overstate. A new study by the British Chambers of Commerce, drawing on analysis by the University of Essex, shows that 54% of SMEs are now using AI tools, more than double the 25% reported in 2024.

 

For most business leaders, the conversation has focused on how much time and money the technology can save. But another, potentially more consequential question is beginning to be asked: does it matter who owns the tool and how it's used?

 

The conflict that’s changing the conversation 

With the world’s largest AI companies, OpenAI (owner of ChatGPT), Anthropic (owner of Claude), and Palantir, all involved in the contentious conflict in Iran, the question has arisen of whether that involvement could reflect negatively on their brand reputations, and what knock-on effect it could have on investors and consumers.

 

Business leaders and marketing teams alike are facing a new challenge of AI ethics and where this sits within the organisation.  

 

The recent fallout between Anthropic, owner of Claude, and the US Government over its refusal to let Claude be used for autonomous weapons or the mass surveillance of citizens saw the Pentagon cancel its $200 million contract last month and label the company a ‘supply chain risk’. On the same day, OpenAI, the owner of ChatGPT, secured its own Pentagon deal.

 

While Anthropic's stance might have cost it a significant government contract, it accelerated the company's growth. That’s despite the technology still being used in the conflict as a decision-making tool for US forces; it was used to identify more than 1,000 targets on the first day of the war alone.

 

Consumers backed Anthropic 

Downloads of Claude spiked 70% overnight after news broke of Anthropic losing its Pentagon contract. More than 2.5 million users boycotted ChatGPT, uninstalls of ChatGPT spiked 295%, and business adoption of Claude surged. Claude even overtook ChatGPT on the US App Store.

So, are we entering an era of ‘AI-washing’? One where the tech tools a company chooses to use, how ethical that provider is perceived to be, and how the company uses those tools are subject to the same scrutiny as sustainability claims?

 

Will AI-ethics become the next corporate greenwashing?  

Just as environmental sustainability evolved from a niche concern into a mainstream indicator of corporate responsibility that shaped investor confidence and consumer trust, could AI ethics be poised to follow the same path?

 

It’s not all that long ago that environmental credibility was a nice-to-have, perhaps meriting little more than a page in the annual report.

 

Today, a company's carbon footprint, supply chain ethics, and water stewardship are subject to investor scrutiny, regulatory obligation, and genuine consumer choice. AI ethics is on the same trajectory, and it's moving faster. 

 

It’s not just about whether the AI providers are palatable to increasingly conscious stakeholders. The benefits of the technology are offset by ongoing environmental concerns. Training and running large AI models consume enormous amounts of energy and water, with data centres placing significant strain on local water supplies in regions already facing scarcity.  

 

For businesses that have spent years championing carbon reduction and water stewardship, ignoring the footprint of their own AI infrastructure would be a major contradiction. 

 

The investor dimension 

While the use of AI, and all the efficiency benefits it brings, is hugely attractive to investors, the ethics of how it is built and used are beginning to shape investor confidence in much the same way that environmental performance did.

 

Although purely ethically-minded investment has given way in recent years to a more pragmatic approach, ESG investing has remained steady: the global ESG investing market was valued at around $39 trillion in 2025 and is projected to grow to $45 trillion in 2026, with forecasts suggesting it could reach $180 trillion by 2034, a growth rate of nearly 19% per year (Fortune Business Insights).

 

Could greater governance hold the key?  

Anthropic is setting itself apart from its competitors by supporting calls for greater guardrails around the use of AI, positioning itself as the more responsible, more transparent provider – and it’s working.

 

We are starting to see businesses follow suit. Responsible AI governance, which covers data privacy, algorithmic accountability, and responsible automation, is helping organisations not only with the selection of AI vendors but with the use of the platforms and managing their associated risks. 

  

For PR and marketing professionals, a great reference point is the Venice Pledge – a commitment to the Global Alliance for Public Relations and Communication Management’s Responsible AI Guiding Principles. These Principles provide member organisations and the global public relations and communication profession with a shared framework for the responsible use of artificial intelligence.

 

Where does this leave us? 

Many businesses are still at the stage of exploring which AI tools they should be implementing. Questions of brand reputation, governance, and longer-term implications remain on the periphery.

 

Yet what is becoming increasingly clear, as the Pentagon is finding out, is that once embedded into workflows, these tools become difficult to roll back. That means governance, and your brand’s red lines, need to be agreed now.

 

 

A quick thank you. This blog has evolved out of a recent LinkedIn post I shared, asking for people’s views on the brand risk of AI. Thank you to everyone who took the time to share their thoughts: Suzi Steele, Emma Gordon, Jill Simpson, Ross Nicholson. 

 
 
 
