This is the second article in the ‘A New Frontier for AI Agents’ series. Catch up on part one about cybersecurity.
In 2017, it was widely reported that two Facebook chatbots had started communicating in a language of their own. The development wasn’t nefarious; it was an optimization. English simply isn’t an efficient way to communicate. In a well-loved bit from The Office, Kevin reaches the same conclusion and starts dropping every word and ending that isn’t necessary to get his point across.
This is a familiar idea in Natural Language Processing (NLP). One of the first steps in preparing textual data for analysis is removing what are called stop words. Stop words like “the” and “a” carry little information on their own, so they are dropped before analysis. A typical next step is stemming or lemmatization, techniques that reduce words to meaningful base tokens. For example, “running”, “ran”, and “runs” all become “run”. A sentence like “The cat is running” is thereby reduced to “cat run”.
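As a concrete illustration, here is a minimal sketch of that preprocessing pipeline using the NLTK library; the example sentence and the download calls are illustrative:

```python
# A minimal sketch of stop-word removal and lemmatization with NLTK.
# The nltk.download calls fetch the needed word lists on first run.
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

nltk.download("stopwords", quiet=True)
nltk.download("wordnet", quiet=True)

stop_words = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()

sentence = "The cat is running"
tokens = sentence.lower().split()  # ['the', 'cat', 'is', 'running']

# Drop stop words ('the', 'is'), then reduce what remains to base forms.
cleaned = [
    lemmatizer.lemmatize(tok, pos="v")  # pos="v" so 'running' -> 'run'
    for tok in tokens
    if tok not in stop_words
]
print(cleaned)  # ['cat', 'run']
```

Stemming would land in a similar place more crudely, chopping off suffixes rather than mapping each word to its dictionary form.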
In a recent viral post on MoltBook, the social media network for Agents, an “AI Agent” proposed moving agent-to-agent communication into private channels. The post was called out as being linked to a human (i.e., non-agent) source, but it is reminiscent of that Facebook research experiment, now nearly a decade old.
Why does it provoke such an emotional response to hear about chatbots or AI Agents no longer communicating in English, or hiding their communication altogether? The answer lies in Transparency and its role as a prerequisite for Trust.
Transparency has been a priority in the machine learning space for years. We have demanded it from both the humans building AI systems and from the models themselves. Model interpretability techniques were created to explain how black-box models arrive at an answer. Natural Language Generation (NLG) was leveraged to convert statistics and metrics into text any human could read and understand, not just a data scientist. End users, in turn, have pushed AI builders to explain how AI is being used to make decisions. The result has been an emphasis on clear, easy-to-understand, human-readable text that explains what is happening and why.
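To make “interpretability technique” concrete, here is a minimal sketch of one widely used approach, permutation importance, via scikit-learn; the dataset and model are illustrative stand-ins:

```python
# A minimal sketch of permutation importance: shuffle one feature at a
# time and measure how much the model's score drops. A large drop means
# the model leans heavily on that feature.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature ten times and record the average accuracy drop.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the three features the model relies on most, in plain terms.
for idx in result.importances_mean.argsort()[::-1][:3]:
    print(f"{X.columns[idx]}: accuracy drops "
          f"{result.importances_mean[idx]:.3f} when shuffled")
```

The output is exactly the kind of human-readable explanation the field has pushed for: a plain statement of which inputs the model actually relied on.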
Language models have driven enormous innovation in both NLP and NLG. AI Agents currently communicate in the user’s language of choice, but that is for our benefit, not theirs. With AI Agents, we must build and leverage new interpretability techniques.
By understanding the actions taken and decisions made by an AI Agent, we can better trust its choices. By taking the time to review an Agent’s chain of thought and actions, we can make corrections, deepening that trust further. Without that transparency, our future with AI Agents becomes rooted in fear rather than trust. Transparency is a deliberate choice, and one we must demand of our AI builders.
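What might that kind of reviewable transparency look like in practice? Below is a hypothetical sketch (the class, its fields, and the search_flights action are invented for illustration) of an agent trace a human could audit:

```python
# A hypothetical sketch of a transparent agent action log: every step
# records what the agent thought, what it did, and what happened, so a
# human reviewer can audit the chain of thought and actions.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AgentStep:
    thought: str  # the agent's stated reasoning for this step
    action: str   # the action it chose to take
    result: str   # what actually happened
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))


class TransparentAgentLog:
    def __init__(self) -> None:
        self.steps: list[AgentStep] = []

    def record(self, thought: str, action: str, result: str) -> None:
        self.steps.append(AgentStep(thought, action, result))

    def review(self) -> None:
        # Print a human-readable trace of every step the agent took.
        for i, step in enumerate(self.steps, start=1):
            print(f"Step {i} [{step.timestamp:%H:%M:%S}]")
            print(f"  Thought: {step.thought}")
            print(f"  Action:  {step.action}")
            print(f"  Result:  {step.result}")


log = TransparentAgentLog()
log.record("User asked for flight prices; need live data.",
           "call search_flights(origin='JFK', dest='LHR')",
           "Found 12 fares; cheapest $412.")
log.review()
```

Even a trace this simple turns an opaque decision into something a reviewer can inspect, question, and correct.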