Artificial intelligence has become the defining force of this moment in technology. Depending on your perspective, it can feel like the greatest ally or the sharpest threat. The truth is, it’s both. AI is already transforming how we work, compete, and serve customers. At the same time, it’s introducing fast-moving risks that are complex, heavily regulated (or soon to be), and harder to control than the systems we’ve dealt with in the past, because of how quickly it evolves and how deeply it connects to enterprise data.
A recent industry survey found that nearly 60% of asset managers now cite AI as their top compliance concern. I’d argue the true number is even higher, and that organizations not actively assessing AI risks are falling behind. Whether you’re in asset management or any other industry, the reality is the same: AI is a top-of-mind risk that organizations can’t afford to ignore.
That tension between opportunity and risk has left many leaders asking how to innovate without losing their customers’ trust or violating regulatory expectations. I’ll dive into how you can do that below, but if you’re short on time, jump down to my rapid-fire Q&A with Ridgeline VP of Security and Compliance Bryan Faxel at the bottom of this article for some quick insights.
The Dual Nature of AI
AI offers the investment management industry unprecedented capabilities by automating routine and tedious tasks, allowing firms to focus on relationships and areas that require human judgment.
On the other hand, it introduces new vulnerabilities that must be taken seriously. A careless prompt in a generative AI tool, for example, can inadvertently expose sensitive data to systems outside of your control. The “AI black box” problem is equally pressing: if an AI system cannot explain why it made a particular decision, regulators are unlikely to accept “the model said so” as a sufficient answer.
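To make the prompt-leakage risk concrete, here is a minimal sketch, in Python, of the kind of pre-flight screen a firm might place between employees and an external generative AI tool. The pattern list and function names here are hypothetical illustrations for this article, not a production data-loss-prevention control; real programs would use a vetted classification or DLP service rather than ad hoc regexes.

```python
import re

# Hypothetical patterns for data a firm would not want leaving its control.
SENSITIVE_PATTERNS = {
    "account_number": re.compile(r"\b\d{8,12}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "client_email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return whether a prompt is safe to send to an external model,
    plus the names of any sensitive patterns that matched."""
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]
    return (len(hits) == 0, hits)

ok, hits = screen_prompt("Summarize holdings for client jane@example.com")
if not ok:
    # Block or redact before the prompt ever reaches a third-party system.
    print(f"Prompt blocked; matched: {hits}")
```

A screen like this is only one layer, but it captures the principle: sensitive data should be caught before it leaves systems the firm controls, not discovered afterward in a vendor’s logs.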
Regulators have already signaled how AI will be scrutinized in audits and enforcement actions. The SEC has shifted from issuing guidance to active enforcement, particularly around predictive analytics and AI misrepresentation. In Europe, the EU AI Act has created sweeping new obligations for financial firms using AI and stands out today as the leading international standard. Global frameworks like NIST’s AI Risk Management Framework and ISO/IEC 42001 are raising the standard for governance and transparency. It’s no longer enough to merely consider these risks. You need to act on them, because the industry and regulators are moving from policy to proof.
The Shifting Landscape for Asset Managers
For decades, asset managers have relied on speed, data, and judgment to outperform the market. But the rules are changing. Fee pressures continue to rise, client expectations are growing, and AI is reshaping the way investment management functions are performed.
The changing environment presents both opportunity and risk. A firm that rushes to embrace AI without adequate controls may achieve short-term speed but risks long-term erosion of trust. At the same time, a cautious firm that avoids AI altogether will find itself outpaced by more innovative competitors. The challenge is to strike a balance: embedding responsible AI into the operating model so that efficiency gains are realized without undermining credibility.
The firms that succeed will be those that view compliance and innovation as complementary rather than competing priorities. They will understand market expectations for explainability, implement technical security guardrails to prevent AI from operating outside their intended scope, and thoughtfully manage where sensitive data is stored and accessed. By building AI capabilities on a foundation of governance, transparency, and resilience, asset managers can win new advantages while reassuring clients and regulators that their growth is sustainable and trustworthy.
Why Trust and Innovation Must Advance Together
Security and compliance programs must evolve as quickly as the risks they are designed to manage. Traditional approaches (periodic audits, static controls, review gates, and reactive oversight) simply can’t keep pace with the reality of AI-driven change. Firms today require a deeper level of diligence and continuous governance to truly understand how their partners are using, protecting, and storing sensitive information. Frankly, the vendor oversight model must move beyond check-the-box exercises toward clear, measurable, and transparent accountability.
Trust must be woven into every layer of the technology stack, and that begins with governance. Clear policies for evaluating, adopting, and monitoring AI create the foundation for responsible use. Effective governance ensures AI systems are implemented in accordance with policies designed to protect sensitive information and aligned with leading governance frameworks.
Transparency is equally essential. Firms should demand explainability from their vendors regarding their implementation of AI and require third-party technologies to provide clear audit trails that enable employees, regulators, and clients to understand how AI-driven recommendations were made. Without this visibility, confidence in outcomes quickly erodes. Transparency not only reduces risk but also builds confidence that innovation is being pursued responsibly.
Finally, resilience must be built in from the outset. We know that regulations will continue to change, threats will evolve, and unexpected events will occur. A resilient system can adapt without disruption, ensuring continuity while maintaining compliance. This flexibility is what separates organizations that thrive in periods of rapid technological change from those that struggle to keep up.
Building the Future Responsibly with Ridgeline
Firms need to embrace AI boldly but responsibly. It’s tempting to chase shiny objects, but it’s critical to uphold standards of trust, security, and accountability.
That’s why we built a governance framework grounded in trust and transparency — purpose-built for the regulatory and threat environment we face today. Too many technology solutions treat compliance as an afterthought, something to bolt on once the system is already in production. We went in the opposite direction. From the start, we asked: what would a platform look like if compliance, security, and responsible AI weren’t just features, but the foundation? Our framework rests on three principles:
- Respect for Data – Enterprise-grade security to safeguard customer data, with firm protections to prevent training of models on sensitive information.
- Integrity – Every feature undergoes rigorous testing for security, privacy, legal compliance, and accuracy before release. Human-in-the-loop remains a core safeguard for critical actions.
- Transparency – AI features are off by default. Customers control what’s enabled, and the data sources used are made visible, so customers can move at their own pace (see the sketch after this list).
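As a hypothetical illustration of that last principle, here is what off-by-default gating might look like in code. None of these names reflect Ridgeline’s actual implementation; the point is simply that a feature ships disabled, an administrator opts in deliberately, and the data sources it reads are visible at the moment of the decision.

```python
from dataclasses import dataclass, field

@dataclass
class AIFeature:
    """Illustrative off-by-default AI feature flag (invented names)."""
    name: str
    enabled: bool = False                                   # off until a customer opts in
    data_sources: list[str] = field(default_factory=list)  # visible to admins

features = {
    "meeting_summaries": AIFeature("meeting_summaries",
                                   data_sources=["crm_notes"]),
}

def enable(feature_name: str, approved_by: str) -> None:
    """Opt in explicitly; record who approved and surface what data is used."""
    f = features[feature_name]
    f.enabled = True
    print(f"{f.name} enabled by {approved_by}; reads: {f.data_sources}")

enable("meeting_summaries", approved_by="compliance@examplefirm.com")
```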
These principles aren’t optional. Without them, AI cannot meet the expectations of clients or regulators. Beyond ensuring compliance, they’re also about enabling customers to grow with confidence. Firms that demonstrate control and responsibility are more likely to win in the long run.
With Ridgeline, asset managers can adopt new workflows, explore AI-driven insights, and scale faster, knowing that their foundation is solid. Innovation without trust is a liability. But innovation anchored in trust — reinforced by unified data, immutable audit trails, human oversight, and modern-day vendor oversight — is where the real opportunity lies. That’s the future Ridgeline is committed to building, alongside the firms that choose to partner with us on this journey.
Rapid-Fire FAQs on AI and Compliance with Bryan Faxel
I’ve shared my perspective on why trust and compliance must anchor AI adoption. But I know many of you are wrestling with practical questions about what this means in day-to-day operations. To close, I’ve asked my colleague Bryan Faxel, VP of Security and Compliance at Ridgeline, to give straight-to-the-point answers to common questions asset managers ask.
Q1: What does “compliance-first” really mean when adopting AI in asset management?
Bryan: Compliance-first means embedding regulatory requirements and operational controls into systems from the outset, rather than layering them on later. For asset managers, that means having a clear, responsible AI program that includes guidance around feature reviews (whether developed internally or leveraging third-party software), human-in-the-loop principles, data storage and retention policies, and transparency in AI decisions.
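One of those elements, human-in-the-loop review, is easy to sketch. The following is a minimal, hypothetical illustration of an approval gate: an AI-proposed action is held until a named person signs off, and nothing executes automatically. The class and function names are invented for this example.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    """An AI-suggested action awaiting human review (hypothetical names)."""
    description: str
    proposed_by: str            # which model or feature suggested it
    approved_by: str | None = None

    def approve(self, reviewer: str) -> None:
        self.approved_by = reviewer

def execute(action: ProposedAction) -> None:
    # The gate: no human sign-off, no execution.
    if action.approved_by is None:
        raise PermissionError(f"Blocked: '{action.description}' lacks human approval")
    print(f"Executing '{action.description}' (approved by {action.approved_by})")

action = ProposedAction("Rebalance model portfolio A", proposed_by="ai_rebalancer")
action.approve(reviewer="pm@examplefirm.com")
execute(action)
```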
Q2: Can firms actually innovate faster if they start with compliance?
Bryan: Yes. By establishing clear AI adoption policies and systematic guardrails for secure and responsible AI use, firms are able to move much faster because they have confidence that they are not violating security, legal, or regulatory requirements. It’s similar to the “paved road” concept in enterprise security: providing a safe path lets people experiment and innovate within those guardrails.
Q3: What are the biggest compliance risks when firms rush into AI?
Bryan: The most common gaps we’ve seen stem from a lack of understanding of how vendor-delivered AI features actually operate. It’s tempting to enable AI features, but doing so without a thorough understanding of what data they use, where that data is stored and for how long, which LLMs are involved, and what safeguards protect your data can lead to bad outcomes. We are firm believers in going slow to go fast, and recommend that firms get an adequate understanding of AI risks before enabling anything.
Q4: How are regulators approaching AI in financial services right now?
Bryan: Regulators are shifting quickly from guidance to enforcement. The SEC is already penalizing firms for “AI washing” and misuse of predictive analytics, while the EU AI Act requires strict oversight for high-risk systems. Globally, standards like NIST’s AI RMF are setting expectations for explainability and governance. While the current U.S. regulatory environment is lax, we expect regulation to increase in the short term. We are seeing certain states, like Colorado, advance their own AI regulations. It’s not a matter of if, but when, AI regulation materially impacts the asset management industry.
Q5: What questions should firms be asking vendors about AI compliance?
Bryan: Firms should understand how vendors develop their AI systems and what policies and principles are in place to ensure secure and responsible deployment. More specifically, firms should confirm that their data is not being used to train models, identify which third parties have access to their data, verify that audit trails capture key events, and understand how granular access-control measures can be applied.
Q6: Why are immutable audit trails so important for compliance?
Bryan: Immutable audit trails provide a permanent, time-stamped record of every action and decision. They enable verification of compliance, reconstruction of events for regulators, and give investigators a reliable record when they need to revisit how an action was performed.
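To show why “immutable” matters mechanically, here is a minimal sketch of one common technique: hash-chaining, where each entry commits to the one before it, so any after-the-fact edit breaks the chain. This illustrates the general idea, not any particular product’s implementation.

```python
import hashlib, json, time

def append_entry(log: list[dict], actor: str, action: str) -> None:
    """Append a time-stamped entry whose hash covers the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    entry = {"ts": time.time(), "actor": actor, "action": action,
             "prev_hash": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

def verify(log: list[dict]) -> bool:
    """Recompute every hash; any tampered entry breaks the chain."""
    prev_hash = "genesis"
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if body["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, actor="ai_summarizer", action="generated client report draft")
append_entry(log, actor="pm@examplefirm.com", action="approved draft")
assert verify(log)
log[0]["action"] = "something else"   # tampering is now detectable
assert not verify(log)
```

Production systems typically add write-once storage and external anchoring on top of this, but the chaining step is the core of tamper evidence.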
Q7: What are the right questions to ask technology partners about AI?
Bryan: Third-party risk continues to grow in importance. Now is the time to thoroughly understand the data sub-processors used by your technology partners, including how they store data, what data retention measures are in place, and how they safeguard against security risks such as advanced phishing attacks and supply chain compromise. Answers to these questions will show you where your data is stored, who can access it, and where risk remains in your supply chain.
Want to learn more about Ridgeline and our AI functionality for investment management firms? Request a demo or send us an email at hello@ridgelineapps.com.




