Algorithms and automation are the core of what makes many fintech innovators unique in the marketplace.
They support a wide range of key processes in establishing a financial service, including account opening and user onboarding, underwriting, trading, and more. The sophistication and uniqueness of specific algorithms can even become the main differentiator between a fintech and its competitors, and the most compelling value proposition for venture capital investors seeking high-potential intellectual property for their investments.
The power of algorithms in both service and value creation is undisputed, but there is another aspect of algorithms, namely, the compliance dimension. Multiple regulatory bodies, including the SEC, FINRA, the CFPB, and the FTC, have focused on algorithms within the scope of their requirements and restrictions.
- Per SEC regulatory guidance, the SEC monitors robo-advisers to evaluate how they meet their obligations under the Investment Advisers Act of 1940. As with any registered investment adviser, robos must meet the substantive and fiduciary obligations of the Advisers Act. These include: a) presenting disclosures to clients about how the robo and its advisory services function, as well as any circumstances in which its trading decisions may be overridden; b) obtaining the information from clients necessary to meet the duty to supply advice in those clients’ best interest, including through intake questionnaires and other partly or fully automated account creation tools; and c) “the adoption and implementation of effective compliance programs reasonably designed to address particular concerns relevant to providing automated advice,” including the design, implementation, and annual review of written policies and procedures reasonably designed to prevent violations of the Advisers Act, as well as ongoing review of the algorithmic code and its behavior.
- FINRA has provided guidance on effective supervision and control practices for broker-dealers that use algorithmic strategies. These practices fall into five general areas: General Risk Assessment and Response; Software/Code Development and Implementation; Software Testing and System Validation; Trading Systems; and Compliance. While each of these is a complex area of its own, with respect to compliance, FINRA’s core principle is that “effective communication between compliance staff and the staff responsible for algorithmic strategy development is a key element of effective policies and procedures.” In other words, algorithms cannot be a “black box.” They must be accounted for in the overall compliance strategy and be subject to day-to-day operational monitoring and controls.
- The CFPB, with its focus on credit access, credit pricing, and borrowing costs, has expressed concerns about the use of automated, machine learning, or algorithm-based tools to evaluate creditworthiness in alternative lending. The Bureau is scrutinizing the use of alternative data and machine learning in these scenarios to ensure compliance with the Equal Credit Opportunity Act (ECOA). While it encourages lenders to develop innovative means of granting access to credit, particularly for people whose credit access has been historically limited, it also expects lenders to maintain “a compliance management program that appropriately identifies and addresses risks of legal violations,” in ways that are appropriate to the data and scoring models being used.
- Finally, the FTC has demonstrated similar concerns about the fairness and transparency of algorithms in fintech and other consumer scenarios. In a statement published on April 8, 2020, the FTC noted that its “law enforcement actions, studies, and guidance emphasize that the use of AI tools should be transparent, explainable, fair, and empirically sound, while fostering accountability.” While the statement is not a regulation per se, it sheds light on the areas the FTC scrutinizes. Because algorithms make decisions that affect the end user, they can violate laws such as the Fair Credit Reporting Act (FCRA) and the Equal Credit Opportunity Act (ECOA), or cause consumer injury through unfair and deceptive practices resulting from algorithmically determined decisions. For all their power, algorithms might intentionally or inadvertently embed biases that violate FTC principles of fairness.
To mitigate the risk of investigations, enforcement, and adverse outcomes, fintech innovators using algorithms should consider their mechanisms for compliance and accountability from the get-go. Four key points should be considered:
- A review of the algorithm should take place when developing a compliance strategy, taking all major regulations into account. Product strategy and compliance strategy should work hand in hand. The ways in which an algorithm intakes data should be clearly understood, and the business rules and inferences it uses to make decisions, from investing to trading to lending, must be carefully vetted to identify potential sources of risk.
- Appropriate steps need to be built into business processes and product designs to mitigate the various consumer protection risks. The regulators are very clear that consumers must have an accurate and transparent explanation of how data about them is being collected and where it is being sourced. If an algorithm makes a decision such as denying an account opening or limiting trading ability, companies must disclose the principal reasons for that decision to the consumer. The ability to comply with these consumer requirements should not come after the fact; products and operational processes must support them by design.
- During ongoing operations, robust mechanisms for monitoring and auditing algorithmic decisions must be in place. Decisions should be part of day-to-day compliance review activities, and compliance staff need the ability to monitor the flow of algorithmic decisions and spot red flags when unexpected patterns emerge. This becomes especially important as algorithms evolve from a strictly rule-based approach to true machine learning. In these scenarios, algorithms can be tested manually by picking a sample of accounts or transactions for review, supplemented by periodic review of mock transactions to verify that they generate the expected outcome (either success or rejection). Finally, when transaction volumes grow large, it becomes more effective to automate this checking with testing code that operates at scale.
- In the case of regulatory inquiries, companies must be prepared to demonstrate how their algorithms have acted, to prove that they have done so without unfairness to the consumer, and to show that they have adequately implemented compliance and accountability into the algorithmic portions of their business.
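To make the disclosure point concrete, here is a minimal sketch of how an automated decision can be built to surface its principal reasons at the moment it is made, rather than reconstructed after the fact. All factor names, weights, thresholds, and reason texts are hypothetical and purely illustrative; they do not reflect any specific scoring model or regulatory reason-code list.

```python
# Illustrative only: a toy additive scoring model whose negative factor
# contributions double as the "principal reasons" for an adverse decision.
def score_applicant(features):
    """Return a score and the per-factor contributions behind it."""
    weights = {
        "credit_utilization": -40,     # hypothetical weight
        "recent_delinquencies": -60,   # hypothetical weight
        "income_stability": 30,        # hypothetical weight
    }
    contributions = {k: weights[k] * features[k] for k in weights}
    return 600 + sum(contributions.values()), contributions

# Consumer-facing reason texts (hypothetical wording).
REASON_TEXT = {
    "credit_utilization": "High utilization of existing credit lines",
    "recent_delinquencies": "Recent delinquent payment history",
    "income_stability": "Limited evidence of stable income",
}

def decide_with_reasons(features, cutoff=620, top_n=2):
    """Approve or deny, and for denials return the factors that
    pulled the score down the most, in disclosure-ready form."""
    score, contributions = score_applicant(features)
    approved = score >= cutoff
    negatives = sorted((v, k) for k, v in contributions.items() if v < 0)
    reasons = [] if approved else [REASON_TEXT[k] for _, k in negatives[:top_n]]
    return approved, score, reasons

approved, score, reasons = decide_with_reasons(
    {"credit_utilization": 0.9, "recent_delinquencies": 1, "income_stability": 0.5}
)
print(approved, score, reasons)
```

Because the reasons are derived from the same contributions that produced the score, the disclosure stays consistent with the decision by construction, which is the "by design" property described above.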
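The monitoring practices described above, sampling real transactions for manual review and replaying mock transactions against the algorithm, can be sketched in a few lines. The decision function, its thresholds, and the mock cases below are hypothetical stand-ins for a firm's actual algorithm and test suite.

```python
import random

# Hypothetical rule-based decision function standing in for a firm's
# proprietary algorithm; the fields and limits are illustrative only.
def approve_transaction(txn):
    return txn["amount"] <= 10_000 and txn["account_age_days"] >= 30

def sample_for_manual_review(transactions, k, seed=42):
    """Draw a reproducible random sample for compliance staff to review."""
    rng = random.Random(seed)  # fixed seed so the sample can be re-derived
    return rng.sample(transactions, min(k, len(transactions)))

def run_mock_transactions(cases):
    """Replay mock transactions and collect any whose outcome deviates
    from what the compliance team expects (success or rejection)."""
    failures = []
    for txn, expected in cases:
        actual = approve_transaction(txn)
        if actual != expected:
            failures.append((txn, expected, actual))
    return failures

mock_cases = [
    ({"amount": 500, "account_age_days": 120}, True),      # routine payment
    ({"amount": 50_000, "account_age_days": 120}, False),  # over amount limit
    ({"amount": 500, "account_age_days": 5}, False),       # brand-new account
]
print(run_mock_transactions(mock_cases))  # prints [] when all expectations hold
```

Run on a schedule, the mock-transaction check scales the same review to any volume: an empty failure list confirms expected behavior, while any entry is a red flag for compliance staff to investigate.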
These compliance aspects, from strategy to day-to-day operations, can be tricky to navigate. The regulatory principles spanning four separate regulatory bodies carry some ambiguity and may be subject to interpretation. Fintechs should engage expert support end-to-end, including the support of a compliance expert such as InnReg. Our experience with fintechs and with the unique risks of business process automation in finance is an essential piece of the puzzle in mitigating risk.
If you have questions about the compliance risks of AI and algorithms for your firm, please don’t hesitate to be in touch.