Major Companies Using AI To Monitor Employee Discourse, Raising Ethics Concerns

Massive corporations such as T-Mobile and Walmart are allegedly using new artificial intelligence software to scan their employees’ messages for signs of dissent.

The AI program comes from a startup named Aware and monitors discussions on apps such as Slack, Microsoft Teams and many others. The program looks for keywords that may signal unhappiness with the company or potential liabilities.

While the data is anonymized for employers, not naming who specifically sent a particular message, Aware says it allows clients to better understand how employees in certain groups are responding to different situations.

Co-founder and CEO Jeff Schumann said the program will help companies “understand the risk within their communications” at a quicker pace than ever before, no longer relying on surveys.

While company leadership is thrilled to have new software that can better monitor employees, rank-and-file workers are far less enthusiastic about the program.

“I would feel like, I don’t know, like they’re just trying to get something out of me and get me in trouble or something. I don’t know, it would be very sneaky,” one person commented.

“I’ve seen A.I. being used firsthand, and it’s so flawed and so messed up that I just think it wouldn’t be a useful investment of anyone’s time or money anyways. And that just doesn’t really foster a trustworthy kind of business vibe,” another individual said.

Jutta Williams, co-founder of Humane Intelligence, an AI accountability nonprofit, warned that “a lot of this becomes thought crime” as it treats “people like inventory in a way I’ve not seen.”

Not everyone is against the new program, though.

Of course, Schumann is a staunch defender of it, pointing to the positives of his AI program. He says that on top of helping employers better understand their employees’ needs and concerns, it can also help prevent “extreme violence, extreme bullying [and] harassment.”

Others have little concern about being monitored as they believe they have nothing to hide.

“I think I’m fine with it because I’m very watchful of what I do on company time, company property, anything like that,” a man commented.

Schumann also attempted to quell concerns about a still-flawed AI making consequential decisions, assuring that “None of our AI models make decisions or recommendations regarding employee discipline.”

“When the model flags an interaction,” Schumann explained, “it provides full context around what happened and what policy it triggered, giving investigation teams the information they need to decide next steps consistent with company policies and the law.”

Aware has seen tremendous growth over the years as more companies flock to the monitoring software, averaging a 150% increase in revenue per year for the last five years. In 2023, the company raised $60 million in funding.

While the program is far from being the household name that other artificial intelligence products like ChatGPT and Vertex AI have become, its quick rise certainly reflects market demand for software that can monitor employee discourse.

However, the question of its ethics remains.