Machine learning is widely used to support decision-making and automate processes across commercial sectors, and it is now propelling the financial services industry. Companies are turning to ML use cases in finance for stronger security, a smoother user experience, faster support and near-instant, uninterrupted processing. ML enables financial institutions to transform the endless stream of data they generate into actionable insights for everyone, from the C-suite and operations to marketing and business development.
However, ML technologies have drawbacks, such as unpredictable behavior on unfamiliar data formats and exposure to security risks and cyber-attacks. There are also challenges in implementing AI models across several geographic regions due to time and cost constraints and poor-quality training data. These constraints can introduce biases that result in skewed outcomes, low accuracy and analytical errors, causing escalations in a critical sector like banking. Models that are not trained for all possible scenarios are also vulnerable to adversarial attacks.
Implications of faulty ML models
In the financial sector, a trader fraud detection model can flag activity as fraudulent based on the individual behavior of traders. If the model makes wrong predictions and fails to correctly assess fraudulent activity, the implications can be far-reaching. Consider the impact of the cyber-attack on Travelex. In January 2020, Travelex, a global currency exchange provider, had all of its systems taken offline by a ransomware attack attributed to REvil. As a result of the hack, the fallback to manual transactions inconvenienced millions of holiday travelers, and the company’s stock dropped 20%.
Given the sensitive information banks handle, leveraging the discipline of adversarial machine learning is necessary to determine the weaknesses of ML models and algorithms and to develop ways to resist manipulation from outside sources. The most effective strategy accounts for many possible outcomes, maintains secure access to data, probes the weaknesses of ML models and algorithms during quality checks and testing, and then alerts the enterprise to a possible attack on sensitive information.
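For instance, a quality-check step might probe a model with fast gradient sign method (FGSM) perturbations and compare its accuracy on clean versus perturbed inputs. The PyTorch sketch below is a minimal illustration of that idea; the fraud classifier, feature count and epsilon value are hypothetical stand-ins, not a production implementation.

```python
# Minimal sketch: probe a hypothetical fraud classifier with FGSM
# perturbations during QA and compare clean vs. adversarial accuracy.
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.05) -> torch.Tensor:
    """Return adversarially perturbed copies of the input features."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Hypothetical classifier: 20 transaction features -> {legitimate, fraudulent}.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
x = torch.randn(128, 20)            # stand-in for scaled transaction features
y = torch.randint(0, 2, (128,))     # stand-in for fraud labels

x_adv = fgsm_perturb(model, x, y)
clean_acc = (model(x).argmax(1) == y).float().mean().item()
adv_acc = (model(x_adv).argmax(1) == y).float().mean().item()
print(f"clean accuracy {clean_acc:.2%}, adversarial accuracy {adv_acc:.2%}")
```

A large drop between the two numbers signals a weakness worth flagging before the model handles sensitive information.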
How to address adversarial attacks?
Using adversarial examples as inputs to ML models is an innovative strategy for improving model performance. These intentionally crafted inputs cause the model to make mistakes; retraining on those mistakes improves its performance. This strategy makes the model more robust to unfamiliar data, reduces operational costs and saves staff significant time on document search and analysis. It also helps the model distinguish fraudulent activity from legitimate activity, making it more resilient and secure.
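A common way to apply this strategy is adversarial training: each batch is augmented with perturbed copies of its own samples so the model also learns from inputs crafted to fool it. The sketch below continues the earlier example, reusing its hypothetical model, data and fgsm_perturb helper; the batch size, learning rate and epoch count are illustrative assumptions.

```python
# Minimal adversarial-training loop, continuing the earlier sketch
# (reuses model, x, y and fgsm_perturb defined there).
from torch.utils.data import DataLoader, TensorDataset

loader = DataLoader(TensorDataset(x, y), batch_size=32, shuffle=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):
    for xb, yb in loader:
        x_adv = fgsm_perturb(model, xb, yb)   # craft adversarial variants of the batch
        x_mix = torch.cat([xb, x_adv])        # train on clean + adversarial inputs
        y_mix = torch.cat([yb, yb])
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(model(x_mix), y_mix)
        loss.backward()
        optimizer.step()
```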
Benefits of using adversarial attacks in modeling
Adversarial testing strengthens data security by making models harder to hack. Financial fraud costs customers and businesses billions every year; in 2020 alone, businesses lost a record $56 billion. Companies must therefore know how to prevent fraud. Adversarial attacks on AI models make them more reliable and resilient in predicting unforeseen circumstances, which in turn lowers operational costs through automation. They also create scenarios that help check the model’s performance and enable the client to deploy an existing model in different geographic regions with only minor modifications.
In banking and insurance, where models make decisions such as approving loans and credit or setting insurance premiums, poor modeling can lead to a significant loss of revenue, incorrect predictions or biases. Adversarial attack solutions can reduce these risks by helping identify gaps between the expected outcome and the AI model’s actual performance. Proactively fixing these issues builds the client’s trust in the model and enhances customer service.
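One simple way to act on such gap analysis is to compare clean and adversarial accuracy and gate a model’s promotion on the size of the gap. The sketch below is a hypothetical check built on the metrics from the first example; the 10% threshold is an arbitrary assumption, not a recommended standard.

```python
# Hypothetical go/no-go check: block promotion if the model degrades too
# much under adversarial inputs (threshold is an illustrative assumption).
def robustness_gap_report(clean_acc: float, adv_acc: float,
                          max_gap: float = 0.10) -> dict:
    gap = clean_acc - adv_acc
    return {
        "clean_accuracy": clean_acc,
        "adversarial_accuracy": adv_acc,
        "gap": gap,
        "promote": gap <= max_gap,
    }

print(robustness_gap_report(clean_acc, adv_acc))
```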
Challenges in implementing AI projects for banking are prevalent, but the advantages of adopting these solutions far outweigh the risks. An adversarial-attack approach helps identify a model’s weaknesses and makes it more robust and less biased. Addressing model performance lowers risk and yields a higher ROI on the AI investment. According to McKinsey, the benefits of improving model performance add up to the point that the annual value of AI and analytics for global banking could reach $1 trillion by 2030.
Priya Rani
Technical lead, AI Solutions, Wipro Limited
Priya has 12+ years of experience in IT across AI/ML, data analytics, data science, IoT and cloud technologies. She has extensive expertise in ML algorithms, data mining, deep learning/computer vision and predictive analytics. While working with CDAC, she contributed to various AI projects in collaboration with the Ministry of IT. Priya has mentored 150+ aspiring candidates in AI/ML and data science. She has published papers in international journals and at IEEE international conferences.
Avil Saunshi
Lead Consultant, AI Solutions, Wipro Limited
Avil has 12+ years of experience in IT across data science, image processing, deep learning, ML and cloud technologies. He has worked as a researcher in TCS Innovation Labs, filed patents and published papers in international journals and at IEEE international conferences. Avil has experience in the banking, retail, agriculture and automotive domains. Currently, he works with banking clients and leads the data science team.
Subhankar Roy
Practice Partner, AI Solutions, Wipro Limited
Subhankar has 21+ years of experience in IT. His skills span analytics, big data, BI, data science, AI/ML, cloud technologies, ML algorithms, descriptive, predictive and prescriptive analytics, and delivering end-to-end AI projects. Subhankar previously worked as a Senior Data Scientist, contributing to AI growth for key Wipro clients. He has mentored 50+ aspiring candidates in AI/ML and data science and was responsible for setting up the first ACE Certification Track for AI-ML at Wipro.