
Metrolinx is clarifying how customer queries are handled on its GO Transit social media accounts after a recent response—generated using artificial intelligence—drew some criticism.
On Tuesday, a GO Transit customer complained about the transit agency’s service on X following a Coldplay concert at Rogers Stadium and received a now-deleted AI-generated response.
The concertgoer said they had to leave the Coldplay show early and run to catch the last northbound GO train.
GO Transit replied, “Sounds like Ange had a dramatic dash to catch that last northbound GO train at 11:13 p.m. That’s cutting it close!” It then urged the user to check the website for train times, adding, “You go visit our website.”
Metrolinx, which operates GO Transit, told Now Toronto that the response was “inappropriately drafted” by a third-party vendor.
Though the tone may have been intended as light, the response prompted criticism for sounding dismissive, a problem compounded once it was revealed the tweet was generated by AI.
Metrolinx says that it has now prohibited the use of AI in any social media replies.
Metrolinx says its social media accounts are operated by employees, but the GO Transit account is supported by a contact centre vendor due to high volume.
“We recognize that we did not meet our customer support standards, and we have provided clear direction to our vendor that AI cannot be used under any circumstances.”
AI RESPONSE RECEIVES BACKLASH
Ajax MPP Rob Cerjanec says the incident speaks to deeper accountability issues within the public sector’s growing use of artificial intelligence.
Cerjanec shared a screenshot of the deleted tweet and raised concerns that AI in customer service lacks “empathy and understanding” of the issue.
“In this case, some of the AI-generated messages were even inaccurate. It highlights the need of having someone who is human and has knowledge of the issue to review and respond appropriately in a public format,” Cerjanec told Now Toronto in an email statement on Friday.
“When a member of the public is raising a concern to the government—or a Crown corporation in this case with Metrolinx—you need to employ empathy and understanding. The public wants someone to take responsibility, answer their questions, and fix the issue.”
For Dr. Anatoliy Gruzd, professor of information technology management and director of research at the Social Media Lab at Toronto Metropolitan University, the incident highlights a much broader issue.
“It just shows how careful organizations need to be when deciding to deploy such technology,” Gruzd told Now Toronto.
“It raises questions about how that decision was made within the organizational context, like, how an organization decides to do that… It could be that some social media platforms use a chatbot based on Gen AI, and some organizations can just enable it by a click of a button.”
Cerjanec said the situation also raises questions about who allowed such technology to be used in the first place.
“Who approved it? Who’s overseeing this? What kind of control does this government have over the Crown corporations it’s responsible for?”
In terms of restoring public trust, Cerjanec says Metrolinx needs to do two things: improve staffing and transparency in its contact centres, and ensure post-event train service aligns with public expectations.
“When public organizations tell customers that the issue has been flagged for decision makers, ensure there is a process in place to consider making changes that respond to public feedback,” he said.
“If the public raises issues and never sees anything resolved, that diminishes public trust.”
“And run trains when people need them after concerts at Rogers Stadium,” he added.
Gruzd, meanwhile, says a total ban on AI use in communications may not be a long-term solution, especially as generative AI becomes embedded in many tools.
“I think each organization has to develop a plan and decide what is appropriate in what context to apply such technology. If technology is not foolproof, well tested and vetted, it leads to examples like we are experiencing right now,” he said.
Gruzd pointed to the importance of transparency, especially when people may not realize they’re interacting with AI.
“If customers see it as a cost cutting measure, and they don’t necessarily find the value in responses that the particular chatbot gives, then that’s detrimental. That’s bad for the company image,” he explained.
Gruzd also cited a case in which Air Canada was found liable after its chatbot misinformed a customer about bereavement fares in 2022.
“The court says the company is liable even though the chatbot made a mistake. So, there is a liability from that legal perspective, there’s also trust towards organizations that would be undermined if customers or potential customers learn that chatbot technology is being used to provide answers. It undermines credibility from that organization,” he said.
As more organizations experiment with AI, Gruzd said we are currently witnessing the “first wave” of AI adoption, one he expects will be followed by a wave of public pushback as people see more instances of “misuse or poorly deployed cases like this.”
“With each incident, organizations will need to go back to the drawing board,” he said.