The AI-literate leader: a boardroom non-negotiable in 2026

AI is no longer a side project or a future innovation agenda item; in most firms, it is already embedded in day-to-day tasks, from drafting client emails and summarising meetings to research and analysis, product development and operational reporting. The board-level question has moved on from “should we invest?” to something more like, “are we scaling AI safely, responsibly and profitably?”

This shift in attitude is prompted by the clear gap between AI investment and AI maturity. McKinsey’s 2025 workplace research points out that while most companies are investing, only around 1% believe they are anywhere near maturity, and it identifies the biggest hurdle not as employees but as leadership that isn’t steering fast enough. Couple this with the Stanford 2025 AI Index, which shows the quickening pace of adoption (78% of organisations used AI in 2024, up from 55% the year before), and it becomes clear why an AI-literate leader is key.

According to Phryne Williams, CEO of Capital Assignments: “AI isn’t a technology story; it’s a leadership story. It only creates value when someone is willing to re-think the work, set sensible guardrails, and bring people with them.”

Why AI literacy matters in South African financial services

Financial services carry a unique set of pressures: high trust, regulated decision making and outcomes that affect people’s savings, claims and credit. That reality raises the bar. A ‘move fast and break things’ mindset rarely survives contact with auditors, compliance teams and clients and it certainly won’t hold when AI begins influencing decisions that matter.

In practice, a lot of early AI activity is sensible and low-risk: summarising documents, tightening communication, speeding up research and reducing admin. The problem starts when boards assume that tool adoption equals organisational capability. It doesn’t. Capability is only truly evident when AI begins to influence client outcomes, operational decisions and the compliance environment. “Boards don’t need leaders who can talk about AI, they need leaders who can turn spend into measurable performance, without compromising trust, culture or control,” says Williams. 

From tool adoption to operating change

The temptation of quick wins through pilots or licence arrangements is real, but these rarely produce sustained value. The difference between activity and impact is whether leadership is willing to change how work moves from beginning to end.

BCG’s 2025 AI at Work research makes the point clearly: while usage may have gone mainstream, real business value requires intense and time-consuming workflow redesign. Simply layering tools onto existing processes is not effective.

In banking, that redesign could mean reworking a credit journey from application to outcome, with clearer hand-offs, fewer manual checks and better escalation. In insurance, it might mean modernising the claims journey so clients get speed and transparency without reducing oversight. In wealth management, it could mean improving research and client reporting so advisers spend more time advising and less time typing up reports. Williams says, “AI doesn’t fix inefficient workflows, it exposes them. Leaders who can redesign the work are the ones who unlock the maximum value and leave their competitors in the dust.”

Where culture and risk collide

Another contributor to ‘AI immaturity’ is the two-speed organisation. Executives and managers tend to adopt quickly. Operational teams, however, either adopt, get left behind or find their own workarounds, using tools in inconsistent ways. The result is disparity in the quality of work, little clarity on how it is being produced, and frustration between teams and leaders.

BCG’s survey findings highlight that frontline adoption can stall when fears about job impact rise. Microsoft’s Work Trend Index adds a practical reality many leaders are already seeing: employees are bringing their own AI into work, while many organisations still lack a clear plan to turn individual usage into bottom-line impact.

This is where maturity either accelerates or collapses. If guardrails are unclear, people work around them. If workflows don’t change, AI becomes one more layer of work. If training is vague or optional, adoption becomes inconsistent and unpredictable. “When AI scales badly, it doesn’t just create inefficiency, it creates mistrust. And once trust drops, adoption becomes harder,” comments Williams.

How boards should hire for 2026

When boards recruit CEOs, COOs, CIOs or divisional leaders this year, the question shouldn’t be ‘do you know AI?’ It should be ‘how would you scale AI responsibly in this business and what would you change first?’ Below is a practical executive search lens Williams recommends using across interviews, referencing and assessment:

1. AI judgement

Strong candidates show prioritisation coupled with restraint: they can explain which use cases matter, why they matter, what they would stop and what must remain human-led. They also define value clearly, including what success looks like within a given timeframe.

2. Risk governance

In financial services, governance considerations often set the pace for the business. The right leader can put practical protections in place: data-handling rules, approved tools, human-review thresholds, escalation paths and auditability. Set up correctly, governance facilitates scale rather than stalling it.

3. Change leadership

AI transformation is behavioural change at scale. Leaders need to redesign workflows, build adoption discipline and measure impact. It is relatively easy to pilot an initiative; true leadership shows in the movement from pilot to scale, and in dealing with resistance without losing momentum.

4. The ability to understand the data

This isn’t about coding. It’s about understanding the data reality of the business: quality, access, ownership, permissions and measurement. Many AI programmes fail when implementation comes up against data that is less than optimal. Strong leaders fix the data foundations before scaling.

5. Upskilling mindset

AI maturity becomes durable when AI becomes a company-wide capability. Leaders need to build confidence and competence across roles, not create dependency on a few power users. Enablement should be role based and tied to everyday work, not a once-off training session.

What this means for executive recruitment

Your hiring process is a preview of how the organisation will run its AI agenda. If the brief is vague, decision making is slow, or accountability is unclear, AI scaling will follow the same pattern.

Instead of asking for ‘AI exposure’, assess for evidence: outcomes delivered, adoption achieved and risk managed. Ask what changed in the operating model and what impact it had on performance. In regulated environments, test how candidates handled policy, oversight and human review when the stakes were high. Williams concludes, “Hiring for AI-literate leadership is not about finding a ‘tech executive’. It is about appointing leaders who modernise how work gets done.”
