3 Questions Every CCO Should Be Asking About AI Use Right Now

Artificial intelligence is no longer a future-state conversation for Chief Compliance Officers (CCOs) – it’s a present-day responsibility.

Across investment firms, AI is being deployed in portfolio management, trading, compliance monitoring, marketing, and client communications. At the same time, regulatory expectations are accelerating – even without formal AI-specific rules.

The result?
A growing gap between AI adoption and AI governance.

In fact, AI and predictive analytics are now ranked as the top compliance concern among investment advisers, according to industry survey data.

Meanwhile, regulators like the U.S. Securities and Exchange Commission are signaling increased scrutiny through initiatives like AI task forces, exam priorities, and enforcement actions.

So what should CCOs actually be asking right now?

This post breaks down three critical questions every CCO should be asking about AI use today—and what “good” looks like in practice.


Why This Matters Now

Before diving into the questions, it’s important to understand the current regulatory posture:

  • The SEC has launched a dedicated AI Task Force to oversee AI usage across financial markets
  • Regulators are applying existing compliance frameworks (fiduciary duty, disclosure, supervision) to AI
  • Examinations increasingly focus on AI-related policies, procedures, and disclosures
  • Enforcement actions are already targeting misleading AI claims (“AI-washing”)

There’s no ambiguity here:
CCOs are expected to understand, oversee, and govern AI—today.


Question 1: “Do We Actually Know Where and How AI Is Being Used?”

This sounds simple. It’s not.

In most firms, AI usage is:

  • Decentralized
  • Embedded in tools
  • Sometimes invisible to compliance

Where AI Is Hiding

AI is often already in use across:

  • Portfolio construction tools
  • Trade surveillance systems
  • Marketing automation platforms
  • CRM systems
  • Vendor SaaS platforms

And increasingly:

  • Generative AI tools (e.g., content drafting, research assistance)

The risk isn’t just usage—it’s unknown usage.


Why Regulators Care

The SEC has emphasized the importance of AI use-case inventories, including within its own operations, as part of its governance approach.

At the same time, regulators have made it clear that:

Firms are responsible for AI outcomes, even when using third-party tools.

This creates a critical compliance expectation:

👉 If you can’t inventory it, you can’t govern it.


What “Good” Looks Like

A strong compliance posture includes:

  • A centralized AI inventory
  • Coverage of:
    • Internal models
    • Vendor tools
    • Embedded AI features
  • Clear mapping of:
    • Business use case
    • Owner
    • Risk level

Practical Actions for CCOs

  • Conduct an AI usage audit across departments
  • Require disclosure of AI use in vendor onboarding
  • Expand existing model inventories to include AI

👉 Related reading:

  • What “Good” Looks Like: A Practical Framework for AI Governance in Investment Compliance

Question 2: “Are We Comfortable Explaining and Defending AI-Driven Decisions?”

This is where AI intersects directly with fiduciary duty.

In investment compliance, firms must be able to:

  • Explain decisions
  • Demonstrate suitability
  • Defend outcomes

AI complicates all three.


The Core Challenge: Explainability

Many AI systems—especially machine learning models—operate as “black boxes.”

That creates risk in areas like:

  • Portfolio recommendations
  • Trading strategies
  • Client segmentation
  • Compliance alerts

Regulatory Expectations

The SEC has indicated that when firms use AI in:

  • Portfolio management
  • Trading
  • Marketing
  • Compliance

…examinations may focus on:

  • Policies and procedures
  • Disclosure to investors
  • Decision-making processes

Additionally, liability concerns remain one of the biggest barriers to AI adoption:

Advisers fear exposure if AI-driven decisions lead to losses


The Disclosure Problem

AI also introduces new risks around misrepresentation:

  • Overstating AI capabilities (“AI-washing”)
  • Inconsistent or unclear disclosures
  • Lack of alignment across communications

Regulators have already brought enforcement actions tied to misleading AI claims.


What “Good” Looks Like

A strong compliance posture ensures:

  • AI-driven decisions are:
    • Explainable
    • Documented
    • Auditable
  • Disclosures are:
    • Accurate
    • Consistent
    • Reviewed through compliance workflows

Practical Actions for CCOs

  • Require documentation of AI decision logic
  • Implement human-in-the-loop controls for high-risk use cases
  • Review:
    • Marketing materials
    • Investor communications
    • Product disclosures
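
One way to make “explainable, documented, auditable” concrete is a lightweight decision record captured whenever an AI tool influences a client-facing outcome. The sketch below is an assumption about what such a record could contain, not a regulatory template; the function, field names, and example values are all hypothetical:

```python
import json
from datetime import datetime, timezone

def record_ai_decision(tool, inputs_summary, output, rationale, reviewer=None):
    """Build an audit-ready record of an AI-influenced decision.

    `reviewer` captures the human-in-the-loop sign-off for high-risk
    use cases; None means no human review occurred.
    """
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "inputs_summary": inputs_summary,   # what the model saw
        "output": output,                   # what it recommended
        "rationale": rationale,             # why, in plain language
        "human_reviewer": reviewer,         # sign-off for high-risk cases
    }

entry = record_ai_decision(
    tool="portfolio-recommender",
    inputs_summary="client risk profile + current allocation",
    output="rebalance toward fixed income",
    rationale="model flagged equity overweight vs. stated risk tolerance",
    reviewer="jane.doe@firm.example",
)
# Persist as an append-only log line for later examination requests
log_line = json.dumps(entry)
```

A record like this is what turns “could we explain this decision tomorrow?” from a hope into a lookup.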

👉 Key question to ask internally:
“If the SEC asked us to explain this decision tomorrow, could we?”


Question 3: “Do We Have Governance and Controls That Actually Scale?”

Even firms with AI policies often fail here.

Why?

Because governance isn’t just documentation—it’s operational discipline.


The Reality Today

Many firms have:

  • Draft AI policies
  • General compliance frameworks

But lack:

  • Enforcement mechanisms
  • Clear ownership
  • Ongoing monitoring

At the same time, AI adoption is accelerating faster than governance maturity.


Regulatory Direction

Regulators are signaling that AI must be governed like any other business-critical system.

In fact:

  • AI must be managed with the same rigor as traditional models and tools
  • Financial regulators are developing AI-specific risk frameworks tailored to financial services

The Governance Gap

Common issues include:

  • No defined AI ownership
  • No validation processes
  • No monitoring for model drift
  • No escalation workflows

What “Good” Looks Like

Effective AI governance includes:

1. Clear ownership

  • Business owner
  • Model owner
  • Compliance oversight

2. Risk-based controls

  • Tiering by impact
  • Enhanced controls for high-risk use cases

3. Lifecycle management

  • Pre-deployment validation
  • Ongoing monitoring
  • Periodic review

4. Governance structures

  • AI governance committee
  • Model risk oversight
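
The risk-based controls piece above can be sketched as a simple tiering rule: assess a use case's impact, assign a tier, and look up the minimum controls that tier requires. The tiers, impact factors, and control names below are illustrative assumptions for one possible scheme:

```python
# Minimum controls required at each risk tier (illustrative names).
CONTROLS_BY_TIER = {
    "low":    ["inventory_entry", "annual_review"],
    "medium": ["inventory_entry", "annual_review",
               "pre_deployment_validation"],
    "high":   ["inventory_entry", "quarterly_review",
               "pre_deployment_validation", "human_in_the_loop",
               "drift_monitoring"],
}

def required_controls(client_facing: bool, affects_trading: bool) -> list[str]:
    """Assign a tier from two simple impact factors, then look up controls."""
    if affects_trading:
        tier = "high"
    elif client_facing:
        tier = "medium"
    else:
        tier = "low"
    return CONTROLS_BY_TIER[tier]
```

The value of even a toy rule like this is consistency: two teams proposing similar use cases get the same control expectations, and compliance can defend why a given tool was (or was not) subject to enhanced review.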

Practical Actions for CCOs

  • Align AI governance with existing model risk frameworks
  • Establish:
    • Approval workflows
    • Change management processes
  • Implement ongoing monitoring (not just one-time review)

👉 Related reading:

  • Operationalizing Model Risk Management in Investment Firms

How These Questions Work Together

These three questions are not independent—they form a closed-loop governance system:

  1. Visibility → Do we know where AI is used?
  2. Accountability → Can we explain and defend it?
  3. Control → Are we governing it effectively?

Together, they align with how regulators are thinking about AI:

  • Inventory
  • Risk assessment
  • Governance
  • Monitoring

Common Pitfalls CCOs Should Avoid

Even strong compliance teams are running into the same issues:

1. Treating AI as a “technology problem”

It’s a compliance and governance issue first.

2. Assuming vendors handle governance

They don’t. You’re still accountable.

3. Over-indexing on policy, under-indexing on execution

Policies don’t protect you; controls do.

4. Waiting for clear regulation

Regulators are already enforcing under existing rules.


The Opportunity for CCOs

This isn’t just risk—it’s leverage.

CCOs who lead on AI governance can:

  • Enable faster, safer AI adoption
  • Build trust with regulators
  • Strengthen internal alignment
  • Position compliance as a strategic partner

As the regulatory landscape evolves, the role of the CCO is expanding—from oversight to strategic leadership in AI governance.


Where TillieStar Fits In

At TillieStar, we work with investment compliance teams to operationalize AI governance by:

  • Building AI and model inventories
  • Designing governance frameworks aligned with regulatory expectations
  • Implementing validation and monitoring workflows
  • Bridging compliance and technology teams

👉 Explore more: https://tilliestar.com/insights_blog/




Final Takeaway

AI isn’t waiting for regulation—and neither should compliance.

If you’re a CCO, start here:

  • Do we know where AI is used?
  • Can we explain and defend it?
  • Are we governing it effectively?

If the answer to any of these is unclear, that’s your starting point. Because in today’s environment,
AI governance isn’t optional: it’s core to investment compliance.
