Chatbots: A New Tool To Breach Data Security
The hype around artificial intelligence and its possibilities has been building for more than a decade, and one of its most tangible manifestations is the chatbot. Top companies and brands such as Starbucks already use chatbots, and it has been estimated that more than 80% of companies will be using them by 2020.
With today's technology, it is easy to build a chatbot. Dexter, for example, offers a WordPress-like interface through which anyone can create a chatbot in a few minutes and publish it to Facebook or Slack. Chatbots are becoming a useful tool for customer interaction and business support: the underlying artificial intelligence lets users and customers ask questions, resolve issues, and pay bills, doing the work of many humans, which makes chatbots far cheaper than call-center personnel.
However, there is a downside too. As customers interact with chatbots, they hand over information that is valuable to the company, and that makes chatbots a target for phishing, hacking, and general mischief.
What are the Risks?
There have recently been many security-breach incidents in which hackers abused a chatbot to steal personal information, such as card details entered during payments, or to send offensive messages that threaten a brand's reputation. The worst possibility is an attacker finding a way into the chatbot itself by exploiting loopholes in its code, taking a man-in-the-middle position and stealing data while a conversation is taking place. A compromised chatbot can also be used to send users malicious links that give attackers access to backend data. Attackers can even interact with customers directly, impersonating messages from the business and easily stealing customers' personal information. These risks and attack paths are often hidden in ways that make them difficult to mitigate.
Many analysts believe that as companies evolve and integrate chatbots into their business platforms, attackers will leverage those chatbots in malicious campaigns against other businesses and individuals, and that as security improves over time, the attackers' methods will evolve with it.
Some chatbot attacks that have been made public
Many organizations do not take firm action against such attacks because they are unaware of the laws relating to cyber security, so many incidents go unreported, which makes matters more complex. A few incidents, however, have been made public, and they provide valuable insights.
Incidents involving Microsoft and Tinder have been reported. In Microsoft's case, the chatbot Tay was exploited by attackers and used to send anti-Semitic and racist abuse, an attack classified as "pollution of communication channels." On Tinder, cybercriminals exploited chatbots to carry out fraud, impersonating real users and asking victims for their payment details. In June, a cybercriminal group called Magecart attacked the JavaScript that Inbenta had built for Ticketmaster UK's chatbot. The script was used to collect information from the Ticketmaster chatbot and was disabled immediately after the breach.
Most of these attacks target the software itself, and their vulnerabilities can be mitigated with tried-and-tested security practices, such as multi-factor authentication to verify a user before any personal data is collected through the chatbot.
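As a minimal sketch of what that verification step could look like, the flow below issues a one-time code out of band (by SMS or email) and refuses to discuss account details until the code is confirmed. All function names and the in-memory store are hypothetical, for illustration only; a real deployment would use an expiring server-side store and a rate limiter.

```python
import hmac
import secrets

# In-memory store of pending one-time codes, keyed by user ID.
# Illustrative only: a real system needs expiry and rate limiting.
_pending_codes: dict[str, str] = {}

def start_verification(user_id: str) -> str:
    """Generate a 6-digit one-time code to deliver out of band (SMS/email)."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    _pending_codes[user_id] = code
    return code  # hand this to the SMS/email gateway, never to the chat itself

def verify_code(user_id: str, submitted: str) -> bool:
    """Check the submitted code in constant time; each code is single-use."""
    expected = _pending_codes.pop(user_id, None)
    if expected is None:
        return False
    return hmac.compare_digest(expected, submitted)

def handle_sensitive_request(user_id: str, verified: bool) -> str:
    """Refuse to collect or reveal personal data until the user is verified."""
    if not verified:
        return "Please verify your identity first."
    return "Verified. How can I help with your account?"
```

Because the code travels over a channel the attacker does not control, an impersonated chatbot conversation alone is no longer enough to extract a customer's payment details.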
Author: Yogesh Prasad
Information Security Professional | Cyber Security Expert | Ethical Hacker | Founder – Hackers Interview