Services / Training

Bespoke AI training, scoped to your business and your people.

Designed from the top down. Vertically across leadership, function heads, and the wider team. Horizontally across the functions and individuals that matter most. Anchored in your real work, paced to your business's AI Grading.

Definition

What is bespoke AI training?

Bespoke AI training is a scoped programme that builds working AI capability across a business. Every engagement is designed top-down, starting with a short leadership conversation on governance and approval, then layered across the functions and individuals that matter most. Content, pacing, and depth are scoped to the business's current AI Grading — a three-stage capability framework (Grade 1 unstructured use, Grade 2 consistent individual practice, Grade 3 shared business-wide capability) — and to the literacy of the people in the room. Training is most effective when it follows an AI Transformation Roadmap or runs alongside an AI Consulting engagement, so the work being taught is the work the business has decided to do.

The problem

Why most AI training does not stick

Most AI training does not stick because it is sold as a workshop rather than a capability change. The familiar story: a vendor came in, ran a half-day session, showed the team how to use ChatGPT, and left. Two weeks later, a handful of people were still using it. The rest had drifted back to their old workflow.

The pattern is predictable. Off-the-shelf training treats the business as a single audience and ignores three things at once: leadership has not agreed what is approved, different functions need different content, and individuals start at different levels of AI literacy. A one-day workshop addresses none of the three.

AI Grading

The AI Ability Grade

Three grades describe the capability arc a programme is designed to move a business through. Every engagement targets a specific grade progression, scoped to where the business starts.

  1. Grade 1

    Unstructured use

    AI tools used individually, ad hoc, with no shared conventions. Some team members use Copilot or ChatGPT; most do not. No business-wide guidance on what is approved for which work.

  2. Grade 2

    Consistent individual practice

    Every team member can navigate the business's AI tools confidently and apply them to their core work. A trust-but-verify habit is established. Individuals can experiment independently.

  3. Grade 3

    Shared capability

    A shared mental model across the business, a consistent approach to prompting, and agreed governance. Leadership understands what they are endorsing and why. AI use is normalised rather than ad hoc.

A typical engagement moves a business from Grade 1 to Grade 2 or 3 depending on scope. Businesses already at Grade 2 usually engage to lock in shared governance and reach Grade 3.

How we scope

Vertical and horizontal, never one-size-fits-all

Every engagement is scoped along two axes.

Vertical

Top-down, from leadership to working level

The leadership track comes first. Partners, directors, and the executive team need a defensible position on tool approval, client confidentiality, and shadow AI before the rest of the business starts using the tools in earnest. The team track follows with that endorsement already in place.

Horizontal

By function, role, and starting literacy

Different functions have different workflows. A deal team and a compliance team get different worked examples. Different individuals start at different levels of AI literacy. We scope the content to who is actually in the room, anchored in the work they actually do.

What you get

Modular building blocks, scoped to fit

Every programme is built from the same set of components. The mix depends on the business and the target grade.

  • Leadership governance conversation

    A focused, decision-oriented session with the leadership team before the wider training begins. Outputs are the business-wide guidelines the leadership is comfortable putting their names to: which tools are approved, for which work, at what level of data sensitivity.

  • Team training sessions

A series of one to three structured sessions for the whole team or a defined function. Each session combines live exercises against the team's real work, a between-session activity, and a short review at the next session. Sessions are short and frequent rather than long and rare.

  • Leadership track

    A parallel track for partners, directors, or executives covering the same curriculum with more room for peer-level discussion on governance, business-wide adoption, and how to model the behaviour they want to see in the team.

  • One-to-one coaching

    Optional individual sessions for senior people who want their own implementations, prompt templates, or workflows reviewed directly by a SeidrLab partner. Useful where a senior leader is the constraint on adoption.

  • Written AI policy

    Optional written policy delivered as a programme output. Documents the business's agreed position on tool approval, client confidentiality, shadow AI, and consumer vs enterprise products. Reviewed and signed off by leadership before the engagement closes.

  • Debrief and embedding

    A structured debrief two to four weeks after the final session. We measure what is sticking, surface where adoption is drifting, and produce a short written recommendation on what to invest in next.

Fit

Who bespoke AI training is built for

This is the right engagement if:

  • You are a managing partner, CEO, COO, or chief of staff, and AI adoption inside the business is uneven or stalled.
  • Your team has access to AI tools (Microsoft Copilot, Claude, ChatGPT, or a vertical tool) but is not using them on real work in any structured way.
  • You want behaviour change at both the leadership level and the working level, not just AI literacy in the analyst layer.
  • You want the training anchored in your business's actual work, scoped to the literacy and AI Grading of the people in the room.
  • You want leadership to come out of the engagement with a defensible position on tool approval, client confidentiality, and shadow AI.

This is not the right engagement if:

  • Your business has no shared tool stack and no plan to choose one. Training before tool selection produces drift.
  • You expect AI training to replace headcount immediately. The work produces leverage, not redundancies, in the first twelve months.
  • You want a vendor to validate a training plan you have already designed. SeidrLab will challenge the brief when it needs challenging.

Scope and pricing

Every engagement is quoted to your business

Training engagements vary by scope: the number of tracks, the depth of customisation, the functions in scope, and whether the engagement includes one-to-one coaching or a written AI policy. Every engagement is quoted as a fixed fee against a defined scope, agreed in writing before signing. No hourly billing. No surprise invoices.

A 30-minute discovery call is the fastest way to get a realistic number. We will assess your starting AI Ability Grade, scope the right shape for the engagement, and follow up with a proposal you can take to the leadership group.

Who you'll work with

Senior partners deliver every engagement

The partner who scopes the engagement is the partner doing the work.

FAQ

Frequently asked questions

What is bespoke AI training?

Bespoke AI training is a scoped programme that builds working AI capability across a business from the top down, anchored in the business's real work rather than generic examples. Every engagement is scoped to which functions and roles are in scope, what their starting AI Grading is, what tools they already have access to, and what good looks like at the end. Programmes typically combine a short leadership governance conversation, a small number of team sessions, an optional leadership track, optional one-to-one coaching, and an optional written AI policy.

How do you scope an AI training programme?

We scope along two axes: vertical (how the engagement layers across leadership, function heads, and the broader team) and horizontal (which functions and roles are included, and what their existing AI literacy looks like). We assess your starting position against the AI Ability Grade — Grade 1 (unstructured use), Grade 2 (consistent individual practice), Grade 3 (shared business-wide capability) — agree the target grade, and build the programme from there.

What is the AI Ability Grade?

The AI Ability Grade is a three-stage AI Grading framework SeidrLab uses to scope every training engagement. Grade 1 is ad-hoc, individual use with no shared conventions. Grade 2 is consistent individual practice across the team, with a trust-but-verify habit established. Grade 3 is shared capability with agreed governance and leadership endorsement. We assess the business's current grade during discovery and design the programme to move the team to the next grade or two.

How long does an AI training programme take?

Most SeidrLab AI training programmes run across four to six weeks of elapsed time. The team's actual time commitment is kept light: typically three short sessions per track, plus a leadership governance conversation, plus an optional debrief. Longer programmes are possible where a written AI policy or one-to-one coaching is included in the scope.

How much does bespoke AI training cost?

Every SeidrLab training programme is quoted as a fixed fee against a defined scope, agreed in writing before signing. The fee depends on the number of tracks, the depth of customisation, and whether the engagement includes one-to-one coaching or a written AI policy. The fastest way to get a realistic number is a 30-minute discovery call.

What AI tools do you train on?

SeidrLab is tool-agnostic and trains on the stack the business already has. For most businesses that is Microsoft Copilot, Anthropic Claude, or OpenAI ChatGPT, often paired with a vertical tool. We will recommend a stack as part of the programme if the business has not standardised. We do not resell AI platforms and we do not have a referral relationship with any AI vendor.

Will the training work for leaders who are sceptical of AI?

Yes — senior scepticism is usually credibility-driven, not ideological. The leadership conversation is designed for that audience: decision-oriented, anchored in real work, no generic demos. We bring real outputs from real work so the room can judge the quality directly rather than taking a vendor's word for it.

Will you customise the training for our specific industry?

Yes — every SeidrLab training engagement is anchored in the business's real workflows. For an M&A advisory business that means pitch preparation, IM drafting, and bidder research. For a law business that means precedent research and contract review. For an industry association it means member services and reporting. The mechanics of the programme are the same; the worked examples are business-specific.

How do you handle confidentiality and data residency?

Standard NDA on every engagement. The training is designed around the data residency rules of the business, not the other way around. We will not ask anyone to put confidential client data into a consumer-grade tool. Where businesses use an enterprise Copilot or Claude deployment with appropriate data agreements, we work inside that boundary.

How do you measure the success of AI training?

SeidrLab measures active use of the tools in real work, at the end of the engagement and again four to six weeks later. We use self-report at the team level and, where the business allows, usage data from the AI tools themselves. We do not measure success by attendance, NPS scores, or post-workshop survey enthusiasm.

A 30-minute call gets you a clear answer on scope.

We will ask three questions: where your business sits on the AI Ability Grade today, which functions and individuals matter most, and what good looks like at the end. By the end of the call you will have a shape for the engagement, or an honest view of why a training programme is not the right next step.