We know. It’s been a crazy few years in AI. As this very newsletter attests, it’s been hard enough to keep track of the frenetic AI startup fundraising activity in 2025. But it’s not just startups.
This was the year staff worldwide, in companies of all sizes, were given access to corporate LLM accounts with very little guidance on how to use them. At the same time, the risks of unchecked usage of generative and agentic AI became hard to ignore.
Beyond company-level considerations, 2025 also marked a fundamental shift in how AI operates in the world. The technology moved from being a helpful assistant to an autonomous agent that acts on your behalf: booking meetings, writing code, managing databases, even making purchasing decisions.
Meanwhile, the competitive pressure to adopt AI became almost unbearable, with some companies even citing fear of missing out as their number one AI-related risk (way greater than their fear of getting it wrong). With the stakes rising, governments pushed to loosen rules, so as not to lose the economic development battle to other regions.
The result was a perfect storm: powerful technology, widespread deployment, and governance in retreat, a combination that only heightened the risks.
Did you hear, for example, about the Sky News deepfake interview with Imran Khan’s sister, which nearly started a war after going viral and being shared by at least one minister? More recently, French President Emmanuel Macron said Meta was “mocking us” when it refused to take down a deepfake video about a coup d’état that had been seen 13 million times.
Or what about the researchers who established that “adversarial poetry” could be used to jailbreak LLMs? In other words, submit to ChatGPT a poem that only figuratively alludes to making a bomb, and it may well respond with a step-by-step recipe for doing exactly that.
Let’s not forget the developer who got caught pasting hundreds of customer records into ChatGPT, because he “just needed help with an SQL query”. Somewhat reminiscent of why Samsung outright banned OpenAI’s flagship product in 2023, after an employee leaked confidential company code to it…
…all this while the Model Context Protocol (MCP), introduced by Anthropic in late 2024 and quickly adopted by OpenAI and Google, paved the way for LLMs and agentic systems to speak the same language, despite clear security concerns.
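To make that “same language” point concrete, here is a minimal sketch, in Python, of the kind of JSON-RPC 2.0 message that MCP standardises for tool calls. The `query_crm` tool and its arguments are hypothetical, and a real integration would go through an MCP SDK rather than hand-built dictionaries; the sketch is only meant to show where the security worry comes from.

```python
import json

# Minimal sketch (not a full MCP client) of an MCP "tools/call" request.
# The tool name and arguments below are hypothetical examples.
tool_call_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_crm",                     # hypothetical tool exposed by a server
        "arguments": {"customer_id": "C-1042"},  # arguments chosen by the model, not a human
    },
}

print(json.dumps(tool_call_request, indent=2))

# The security concern in a nutshell: whatever the server behind this call can do
# (read a CRM, write to a database, send an email), the model can now trigger it.
# Scoping and reviewing tool permissions matters as much as prompt hygiene.
```

The very thing that makes MCP useful, a common wire format any agent can speak, is also what makes an over-permissioned tool server a liability.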
Copyright issues also spiralled in 2025. In fashion, it emerged that “Frankensteining” is now common practice: models are hired for a single shoot, scanned, and then reproduced in countless campaigns, with no image rights paid. Something similar has been happening to Hollywood actors for a while now. And in music, a Deezer study revealed that 97% of listeners can’t tell the difference between AI-generated music and the regular kind. Warner Music Group (WMG) recently settled its lawsuit against Suno and agreed to license some artists' voices so that users can create AI-generated music on the platform. Why bother listening to those artists' records when you can make your own using their voices? And will the royalties be even lower than Spotify’s?
Then there are the energy and environmental impacts: the data centres needed for all this resource-hungry AI activity are popping up so fast that some US states are already running out of electricity to power them. Cue soaring electricity bills for consumers as operators rush to upgrade their grids. Meanwhile, $162bn of US data centre investment is currently blocked or on hold due to local community protests, according to Data Center Watch’s recent reports.
Last but unfortunately not least, OpenAI blamed a teenager for his own suicide, carried out with ChatGPT’s help. Shortly afterwards, Sam Altman announced that erotic chats would be coming to the platform, an announcement he later regretted, but by then it was too late.
The answer to all of this has been to back away from regulation. In the EU, leaders are pushing to delay the EU AI Act out of fear it will hurt the competitiveness of local companies. In the US, the Trump administration has issued an executive order designed to block states from enacting tough AI rules, though some states seem determined to challenge it.
But the risks remain, even if regulation is no longer addressing them. So how can you and your company protect yourselves from AI’s “elements of irrationality,” as Google’s Sundar Pichai calls them? Especially when regulations like the AI Act or GDPR seem increasingly likely to be watered down, leading to less and less accountability? Not to mention the much-anticipated bursting of the AI bubble…?
The answer is clear: in 2026, organisations without some form of AI governance will leave themselves open to many of the above risks, and more.
This governance could take the form of a “Responsible AI Charter”, which spells out staff members’ rules of engagement with LLMs and related tech. According to a recent KPMG report, 59% of US companies do not currently have one, despite nearly half of employees admitting they use shadow AI.
Imposing a single official LLM for your organisation is an essential first step in such a charter, notably because it makes clear that shadow AI usage - employees using their personal chatbot accounts at work - is not tolerated, for confidentiality reasons (so no one uploads your codebase to ChatGPT…).
Next, make sure the charter includes an obligation to check all content produced by LLMs before it is shared, especially outside your company (with clients or partners, for example). Again, according to KPMG’s report, 58% of employees currently trust AI outputs without verifying them - another major risk for your organisation.
Then, ensure that all staff are trained to be more aware of the above risks. Spotting deepfake videos isn’t that difficult: look out for blurred or wonky lip movements, for example - and ‘em’ dashes (“—”) are a dead giveaway for LLM-written texts. Such training should also strongly insist that no confidential or sensitive information, especially clients’, is ever entered into an LLM (hello again, Samsung!). And, if relevant to your business, staff should also be aware of AI’s often far-reaching copyright implications.
Setting up a charter is just the first step, of course: a specialist committee should be on hand to answer any questions staff may have, and to shape the responsible AI policy moving forwards.
Finally, governance should also include key notions of both sustainability and sovereignty. If you choose a model that’s hosted by a European company in an EU data centre, then European laws (such as GDPR) should exclusively apply. Conversely, US legislation such as the CLOUD Act can give American law enforcement access to data held by US companies, such as cloud providers, even if that data is hosted in Europe. Not to mention the potential for a mad leader to cut transatlantic internet cables.
Choosing open-source models not only means you can host them wherever you like, but also that you can keep a closer eye on their impacts. Whereas closed models like Gemini are a black box when it comes to energy and emissions measurement, open-source ones can be run on infrastructure you control, where consumption can be measured directly. Furthermore, the smaller a model - i.e. the fewer parameters it has - the fewer resources it consumes.
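As a rough illustration of what “keeping a closer eye on their impacts” can look like in practice, here is a short Python sketch that runs a small open-weight model locally and estimates the footprint of a single generation with the open-source codecarbon library. It assumes the transformers and codecarbon packages are installed; the model name is just one example of a small open model, not a recommendation, and codecarbon’s figure is an estimate rather than a utility-grade reading.

```python
# Rough sketch: self-host a small open-weight model and estimate its footprint.
# Assumes `pip install transformers codecarbon` (plus a backend such as PyTorch).
from codecarbon import EmissionsTracker
from transformers import pipeline

tracker = EmissionsTracker()  # estimates energy use and CO2eq for this process
tracker.start()

# Example small open-weight model; swap in whichever model your charter approves.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")
output = generator(
    "Summarise our responsible AI charter in one sentence:",
    max_new_tokens=60,
)
print(output[0]["generated_text"])

emissions_kg = tracker.stop()  # estimated emissions in kg CO2eq
print(f"Estimated emissions for this run: {emissions_kg:.6f} kg CO2eq")
```

Point the same harness at two candidate models and the sustainability clause of your charter gets actual numbers to compare, rather than a vendor’s marketing claim.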
So why not encourage your CTO to choose a small-but-capable, EU-hosted model, like Mistral’s new Ministral 3 series? At a time when data centres’ electricity and water consumption are soaring (projected to grow three- to four-fold by the end of the decade), such choices could keep your company’s IT emissions from going off the scale…
In short, in 2026, we should expect Responsible AI Charters to shift from minority practice to the norm in most companies.
Indeed, in France, they can already be found both in big companies like Groupe La Poste (226,000 staff) and in French Tech 120 startups like Shippeo (200 staff).
Shouldn’t your company be next?
About the author
James Martin is the founder of BetterTech, a blog focused on making technology more responsible. He also conceived and presents “Frugal AI”, a GreenIT.fr training course focused on the impact of AI and how to reduce it. Martin also helps a variety of companies and associations to use technology more responsibly. He was previously Head of Content and Sustainability at Scaleway (iliad Group), and led communications & CSR at Shippeo, where he put in place the company’s Responsible AI Charter.