The Algorithmic Accountability Act has been introduced in the United States to provide a legal framework for issues such as algorithmic bias in the delivery of services to citizens and consumers. The Act routes its regulatory specifics through the Federal Trade Commission (FTC), which it empowers to issue new regulations as necessary. The bill is currently a draft, available for download and open to public debate and feedback.
As AI-led solutions become central to policymaking and business strategy, such legislation is timely. In Europe, the GDPR has several provisions dealing with this subject, and the Hong Kong Monetary Authority (HKMA) has announced guidelines that are currently "voluntary" in nature. Organizations such as the UN and the OECD have initiated forums and debates on this topic. It is an idea whose time has arrived.
If passed in its current form, the Algorithmic Accountability Act of 2019 will require all covered entities that deploy automated decision systems affecting a “consumer” to conduct automated decision system impact assessments and data protection impact assessments. These assessments evaluate algorithms for accuracy, fairness, bias, discrimination, privacy, and security, as well as the use of personal data and the security of information systems and stores.
Let us now understand some of the most important definitions as per the Act.
- “COVERED ENTITY”—The term means any person or corporation with more than $50 million in revenue, or that holds data on more than 1 million consumers or 1 million consumer devices. It also applies to firms that act as data brokers.
- “AUTOMATED DECISION SYSTEM” (ADS)—The term means a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making, that impacts consumers.
- “AUTOMATED DECISION SYSTEM IMPACT ASSESSMENT” —The term means a study evaluating an automated decision system and the automated decision system’s development process, including the design and training data of the automated decision system, for impacts on accuracy, fairness, bias, discrimination, privacy, and security. Further, companies would be required to correct any issues discovered during impact assessments and evaluate how their systems protect consumer privacy.
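To make the fairness and bias portion of such an assessment concrete, here is a minimal, hypothetical sketch of one metric an assessor might compute: the demographic parity difference, i.e. the gap in positive-outcome rates between two groups. The function names, data, and metric choice are illustrative assumptions, not anything prescribed by the Act.

```python
# Hypothetical sketch of one fairness check an ADS impact assessment
# might include. All names and data are illustrative.

def positive_rate(decisions):
    """Fraction of decisions that are approvals (True)."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in approval rates between two groups of decisions."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Example: loan decisions recorded separately for two demographic groups
group_a = [True, True, False, True]    # 75% approved
group_b = [True, False, False, False]  # 25% approved

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.2f}")  # prints 0.50
```

A real assessment would go far beyond a single metric, but even a simple gap like this gives auditors and regulators a concrete, reportable number to discuss.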
As you can imagine, these legal developments can be seen as restoring the balance between digital, AI-powered corporations and the data subjects - the citizens. They present a variety of challenges and opportunities for AI adoption. Let us discuss both.
Challenges
- Corporations now have to move from voluntarily adopting ethical AI practices to comprehensively meeting mandatory requirements. This will add significant cost, time, and effort to rolling out AI solutions.
- Under the Act, a detailed description of the ADS, its design, its training approach, its data, and its purpose must be put in the public domain in a form citizens can easily understand. This is easier said than done.
- Pure-play data brokers and exchanges now come under greater scrutiny, and their compliance costs go up. Buyers of such data will have to share costs and liabilities.
- Traditionally, AI algorithms have benefited hugely from variables that closely mimic human biases and preferences. For example, “personalized” information such as demographics and the social-media chatter of a loan applicant has fairly useful predictive power. Similarly, a loan approval officer’s bias towards certain stereotypes usually shapes the “historical labelled data” that determines whether you receive a loan approval or not. Under “data minimization” principles, such variables can no longer be used, which may reduce ADS model accuracy and hence ROI.
- AI solution providers, vendors, and cloud platform providers will have to think seriously about liability, reputational risk, and penalties when entering into contracts.
- AI solution buyers will have to revisit their engagement models as well. The division of roles, accountability, and model ownership within the firm will become important, and the terms and conditions of AI-powered public-service agreements will have to be revisited.
- The recent explosion of citizen data scientists and augmented analytics solutions will come into question. These approaches will need stronger governance and risk mitigation measures, which will increase costs and slow rollouts.
- The Act defines a high-risk ADS to include one that “systematically monitors a large, publicly accessible physical place”. So an ADS solution that uses CCTV footage of, say, a bus station for law enforcement could potentially be challenged.
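The data-minimization trade-off described in the challenges above can be illustrated with a toy sketch. Everything here is a hypothetical assumption for illustration: a proxy feature (zip code) that mirrors historical bias predicts the biased historical labels well, so a rule barred from using it scores lower against that same history.

```python
# Illustrative sketch of the data-minimization accuracy trade-off.
# The applicants, rules, and thresholds are all invented for this example.

# Each applicant: (income_in_thousands, zip_code, historically_approved)
applicants = [
    (90, "A", True),  (70, "A", True),  (50, "A", True),  (30, "A", False),
    (90, "B", True),  (70, "B", False), (50, "B", False), (30, "B", False),
]

def with_proxy(income, zip_code):
    # Rule that mirrors biased history: zip code dominates the decision
    return (zip_code == "A" and income >= 50) or income >= 90

def without_proxy(income, zip_code):
    # Data-minimized rule: income only, proxy variable removed
    return income >= 60

def accuracy(rule):
    """Fraction of historical decisions the rule reproduces."""
    hits = sum(rule(income, zc) == label for income, zc, label in applicants)
    return hits / len(applicants)

print(f"with proxy:    {accuracy(with_proxy):.2f}")     # prints 1.00
print(f"without proxy: {accuracy(without_proxy):.2f}")  # prints 0.75
```

Note that the "lost" accuracy here is accuracy at reproducing biased historical labels, which is precisely the behaviour the impact assessments are meant to surface.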
Opportunities
- The very absence of legal recourse for “aggrieved” citizens would have inhibited AI adoption. Once enacted, such acts could be reassuring and allow new, innovative ADS solutions with active public participation.
- Many corporations that were merely “wetting their feet” with AI in the absence of regulation will be more comfortable taking the plunge, especially in the government sector. Policymakers now have a reference point.
- AI insurance could emerge as a new revenue stream for insurance companies
- “ADS auditor” is likely to be the next hot job title. The Act clearly lays out that corporations will have to conduct a complete technical and risk assessment of all AI solutions before rollout. This will require a pool of independent ADS auditors, both internal and external.
- The same applies to ADS forensics whenever a claim is made against a corporation or government.
All in all, these are very exciting times for widespread AI adoption. These legal developments could be viewed as either a speed breaker or a slingshot; the jury is still out.
Reference
- Full text of the bill: https://www.wyden.senate.gov/imo/media/doc/Algorithmic%20Accountability%20Act%20of%202019%20Bill%20Text.pdf