Harry Peppitt
Last Updated: 6 Feb 2026
9 min read
3,512 words
So you've decided to bring in external AI expertise. Now you're looking at 5-10 consulting firms, all claiming they can "transform your business with AI." The websites look similar. The pitches sound the same. Everyone has "proven methodologies" and "senior teams."
How do you separate the firms that will actually deliver from the ones that will bill you for six months and leave you with a PowerPoint deck?
This guide walks you through the evaluation framework and the specific questions to ask. Use it to assess any AI consultant and make a confident decision. If you're still struggling, you can always give us a call too.
What Makes a Good AI Consultant?
Before we get to questions, understand what you're actually hiring for. AI consultants serve three distinct functions, and most claim to do all three:
Strategic Advisory
Help you figure out where AI fits in your business, build roadmaps, create governance frameworks. This requires business acumen, industry knowledge, and the ability to translate technical capabilities into business outcomes.
Technical Delivery
Design and build AI systems that actually work in production. This requires engineering expertise, project management, integration experience, and the ability to navigate your specific technical environment.
Change Management
Ensure your team actually uses the AI systems you build. This requires organisational psychology, training capability, and stakeholder management skills.
Some consultants are strong in one area and weak in others. Some outsource parts (senior partner sells strategy, junior contractors do the build). Some are genuine full-stack partners.
You want to figure out which is which before you sign a contract to avoid headaches down the line.
The 10 Critical Questions
Ask these questions in your discovery calls. The answers will tell you everything you need to know.
1. "Can you show me 3 recent projects with specific, measurable outcomes?"
What you're testing: Whether they deliver real results or just process and PowerPoint decks.
Good answers include:
Specific metrics: "10x pipeline increase," "$500K annual cost savings," "40% time reduction"
Timeline: "Delivered in 12 weeks"
Client context: "Mid-market real estate firm, 50 employees"
Technical approach: "Integrated Salesforce with Apollo and CoreLogic via AWS"
Red flags:
Vague outcomes: "improved efficiency," "enhanced capabilities," "better insights"
No metrics: "The client was very happy"
Only brand-name clients: "We worked with Google" (you're not Google, so that's not relevant)
Can't share any details: "Everything is confidential" (they should have permission to discuss at least 2-3 projects)
Why this matters: Consultants who can't articulate specific outcomes from recent work either don't track results or don't deliver them.
2. "Who will actually do the work on my project?"
What you're testing: Bait-and-switch risk.
Good answers:
Names and roles of the actual delivery team
Backgrounds and expertise of each person
How much time each person will dedicate to your project
Clear delineation between who sells and who delivers
Red flags:
"Our team of experts" without naming anyone
Partner sells the work, then you never see them again
Junior consultants doing everything while charging senior rates
Team composition is "TBD" or "depends on availability"
Why this matters: The most common complaint about big consulting firms is bait-and-switch. The experienced partner wins the work, then junior staff (who are learning as they go) actually do it.
Follow-up question: "Can I meet the delivery team before signing?" If they say no, walk away.
3. "What's your technical approach and methodology?"
What you're testing: Whether they have a repeatable framework or make it up as they go.
Good answers:
Specific phases with clear deliverables: "We start with a 2-week assessment, then build a roadmap, then execute pilots"
Explanation of how they make technology choices: "We assess your existing stack, evaluate 3-5 options, then recommend based on cost, integration complexity, and maintainability"
References to proven patterns: "We've built this integration 15 times, here's our standard approach"
Customisation boundaries: "We adapt the framework to your needs, but these phases are non-negotiable"
Red flags:
"We tailor everything to each client" (translation: no proven patterns, you're the guinea pig)
"We use cutting-edge AI" without explaining what that means for your specific problem
Methodology sounds like it came from a consulting textbook with no adaptation to AI work
Can't explain their technical stack or why they chose it
Why this matters: Consultants without proven methodologies take 2-3x longer and cost 2-3x more because they're inventing the process as they go.
Follow-up question: "Can you show me your typical project timeline and deliverables for a project like mine?"
4. "What happens after the engagement ends?"
What you're testing: Whether they plan for sustainability or just build and leave.
Good answers:
Explicit handoff plan: "We train your team for 2 weeks, document everything, then provide 30 days of post-launch support"
Maintenance options: "You can maintain internally, sign up for our support retainer, or use a hybrid model"
Knowledge transfer: "All code is yours, documentation included, we run training sessions for your team"
Ongoing relationship: "Most clients start with a Sprint project, then move to monthly advisory for ongoing support"
Red flags:
"The system is turnkey" (nothing is turnkey, all systems need maintenance)
No discussion of handoff or training
Maintenance requires their ongoing involvement at high cost
Documentation and knowledge transfer not included in base price
Why this matters: AI systems require ongoing maintenance. Consultants who don't plan for post-engagement sustainability leave you dependent on them indefinitely or with a system that breaks as soon as they leave.
Follow-up question: "What does internal maintenance require from our team? What skills do we need?"
5. "Have you worked in our industry before?"
What you're testing: Whether industry experience is critical for your project.
Here's the nuance: Industry experience helps but isn't always required. It depends on your project.
When industry experience matters:
Highly regulated industries (healthcare, finance, legal)
Industry-specific data sources or integrations
Unique workflows that don't translate across sectors
Complex compliance requirements
When industry experience matters less:
Common workflow automation (lead gen, CRM integration, reporting)
General AI strategy and governance frameworks
Problems that exist across industries (manual data entry, pipeline management)
Good answers:
Honest about their experience: "We haven't worked in commercial real estate specifically, but we've built lead generation systems for 10 different industries. Here's how we'd adapt our approach."
Relevant transferable experience: "We haven't worked with law firms, but we've done document automation for accounting firms and consulting practices."
Clear plan to learn your context: "We'd spend the first 2 weeks understanding your specific workflows before designing anything."
Red flags:
Claim expertise in 20 industries (nobody is an expert in everything)
Can't explain how they'd learn your industry context
Dismiss industry specifics as irrelevant: "AI is AI, doesn't matter what industry"
Why this matters: You want consultants who understand when industry context is critical and have a plan to acquire that knowledge if they don't have it already.
6. "What's your stance on build vs. buy?"
What you're testing: Whether they're technology-agnostic or pushing specific vendors.
Good answers:
Framework for making build vs. buy decisions: "We evaluate based on cost, customisation needs, integration requirements, and your internal capability"
Examples of both: "For Client A we recommended Salesforce Einstein because they were already on Salesforce. For Client B we built custom because their workflows were too specific."
Transparent about trade-offs: "Off-the-shelf is faster and cheaper but less flexible. Custom is more expensive but fits your exact needs."
No vendor bias: "We're tool-agnostic. We recommend what works for your situation."
Red flags:
Push one platform exclusively: "Everything should be on [vendor]"
Undisclosed financial relationships with the vendors they recommend (reseller agreements, referral fees)
"Build custom" for everything (they make more money building)
"Buy off-the-shelf" for everything (they avoid technical work)
Why this matters: Technology-agnostic consultants recommend what's best for you. Biased consultants recommend what's best for them (higher fees, vendor kickbacks, easier delivery).
Follow-up question: "Do you have any financial relationships with the vendors you recommend?" Such relationships are fine if disclosed, but if they hesitate or deflect, they're trying to hide something.
7. "How do you handle scope changes?"
What you're testing: Flexibility and transparency in contracting.
Good answers:
Clear change order process: "If scope increases, we provide a written change order with cost and timeline impact for signoff before proceeding"
Examples of common changes: "Most projects have 1-2 change orders. Usually it's adding an integration or expanding to another department."
Fixed-price boundaries: "These items are in scope. These are explicitly out of scope. Anything else requires a change order."
Retainer flexibility: "On retainer engagements, we have built-in flexibility for scope adjustments month to month"
Red flags:
Vague scope in initial proposal: "We'll figure it out as we go"
Time and materials with no cap: "We bill whatever it takes"
Aggressive resistance to any changes: "The scope is locked, any change doubles the price"
No clear process for handling changes
Why this matters: Every project has some scope evolution. You need a transparent, fair process for handling it.
Follow-up question: "Can you show me a sample change order from a previous project?"
8. "What if this project doesn't work?"
What you're testing: Honesty about risk and contingency planning.
Good answers:
Acknowledge specific risks: "The main risk is data quality. If your CRM data is incomplete, we'll need to clean it first."
Mitigation plans: "We'll do a 1-week data audit before committing to the full build. If data quality is a blocker, we'll tell you upfront."
Honest about uncertainty: "We can't guarantee 10x results. We can guarantee a working system. The ROI depends on your team's adoption."
Kill criteria: "If we hit these specific roadblocks, we'll recommend pausing or pivoting rather than continuing to burn budget."
Red flags:
Guarantee success: "This will definitely work, no risk"
Blame the client for failures: "It only fails if you don't follow our recommendations"
No contingency plans: "We haven't thought about that"
Dismiss valid concerns: "That won't be an issue"
Why this matters: Consultants who won't discuss risk are either naive or dishonest. Both are bad.
Follow-up question: "What's the most common reason projects like mine fail, and how do you prevent that?"
9. "Can I talk to some reference clients?"
What you're testing: Whether they have happy clients willing to vouch for them.
Good answers:
Immediate yes with contact details
Context about each reference: "This client did a similar project, this one is in your industry, this one is a long-term advisory relationship"
Proactive offer: "Here are three references. Feel free to ask them anything."
Red flags:
Refuse to provide references: "All our work is confidential"
Provide references but they're unresponsive (they were told to dodge your calls)
Only provide references for engagements from 2+ years ago (recent clients aren't happy)
References are generic testimonials, not actual contacts
Why this matters: Happy clients talk to prospects. Unhappy clients don't. If a consultant won't provide references, it's because their clients won't say good things.
What to ask references:
"Was the project delivered on time and on budget?"
"Did the same team that sold the work actually do the work?"
"What would you do differently if you were choosing a consultant again?"
"Would you hire them again for another project?"
10. "What's your typical engagement timeline?"
What you're testing: Whether their timelines are realistic.
Good answers (for mid-market projects):
Assessment: 2-4 weeks
Strategy: 4-6 weeks
Pilot project: 8-12 weeks
Acknowledgement of variables: "This assumes reasonable data quality and clear decision-making. If you have legacy systems or complex approval processes, add 25-50%."
Red flags:
"AI transformation in 2 weeks" (not happening)
"It depends" without any ballpark estimate
Timelines that sound too good to be true (they are)
No acknowledgement of complexity or risk factors
Why this matters: Unrealistic timelines indicate inexperience. Consultants who've done this before can estimate accurately.
Follow-up question: "What factors typically cause timeline delays, and how do you mitigate them?"
Red Flags: When to Walk Away
Some warning signs should end the conversation immediately:
Consultant Won't Discuss Pricing Ranges
If they refuse to provide even ballpark numbers before the first call, they're either hiding high prices or don't have pricing discipline. Either way, walk away.
Junior Team Doing Senior Work
If the person selling the work won't be doing the work, and they can't prove the delivery team has relevant experience, that's a bait-and-switch. Walk away.
Vague Deliverables
"We'll help you with AI strategy" is not a deliverable. "AI Strategy Roadmap (15-page document with phased initiatives, success criteria, and budget allocation)" is a deliverable. Vague proposals lead to scope disputes.
No Post-Engagement Plan
Consultants who don't discuss handoff, training, and ongoing support will leave you with a system you can't maintain. Insist on explicit knowledge transfer.
Guaranteed Outcomes
"We guarantee 10x ROI" is a lie. AI delivers measurable value, but nobody can guarantee specific business outcomes. Consultants who promise the moon are either naive or dishonest.
Pressure to Sign Quickly
"This price is only available if you sign this week" is a sales tactic, not a legitimate constraint. Consultants who pressure you to skip due diligence don't deserve your business.
Green Flags: Consultants Who Get It
Look for these signs of a strong partner:
Transparent Pricing
They provide clear pricing ranges, explain what drives cost, and discuss trade-offs between investment levels. No games.
Technology-Agnostic
They recommend tools based on your needs, not their preferred stack or vendor relationships. They can articulate pros and cons of multiple approaches.
Honest About Limitations
They tell you when they're not the right fit, when your project isn't ready, or when your expectations are unrealistic. Honesty over revenue.
Proven Team Continuity
The people who sell the work do the work, or they have a stable delivery team with low turnover and proven experience.
Systematic Approach
They have a clear methodology, proven deliverables, and can show you similar projects they've completed successfully.
Client References Readily Available
They offer references proactively and their clients are genuinely happy to talk about their experience.
Evaluation Scorecard
Use this scorecard to rate each consultant you're evaluating (1 = Poor, 5 = Excellent):
| Criteria | Weight | Score | Notes |
|---|---|---|---|
| Specific outcomes from recent projects | 5x | __/5 | |
| Team transparency and continuity | 5x | __/5 | |
| Clear methodology and technical approach | 4x | __/5 | |
| Post-engagement sustainability plan | 4x | __/5 | |
| Relevant industry or problem experience | 3x | __/5 | |
| Technology-agnostic recommendations | 4x | __/5 | |
| Transparent scope and change management | 3x | __/5 | |
| Honest about risks and limitations | 4x | __/5 | |
| Reference clients available and positive | 5x | __/5 | |
| Realistic timelines and expectations | 3x | __/5 | |
Scoring (multiply each score by its weight, then sum; maximum 200):
180-200: Excellent fit, move forward with confidence
140-179: Good fit with some concerns, dig deeper on low-scoring areas
100-139: Mediocre fit, consider other options
Below 100: Walk away
The weights reflect importance. Team transparency and proven outcomes are 5x because they're non-negotiable. Industry experience is 3x because it's helpful but not always critical.
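If you're comparing several firms in a spreadsheet, the arithmetic is just a weighted sum. Here's a minimal sketch in Python that mirrors the table above; the example scores are hypothetical, and you'd substitute your own ratings:

```python
# Weights mirror the scorecard table above (they sum to 40,
# so a perfect 5 on every criterion totals 200).
WEIGHTS = {
    "Specific outcomes from recent projects": 5,
    "Team transparency and continuity": 5,
    "Clear methodology and technical approach": 4,
    "Post-engagement sustainability plan": 4,
    "Relevant industry or problem experience": 3,
    "Technology-agnostic recommendations": 4,
    "Transparent scope and change management": 3,
    "Honest about risks and limitations": 4,
    "Reference clients available and positive": 5,
    "Realistic timelines and expectations": 3,
}

def weighted_total(scores: dict[str, int]) -> int:
    """Multiply each 1-5 score by its weight and sum the results."""
    return sum(WEIGHTS[criterion] * score for criterion, score in scores.items())

def band(total: int) -> str:
    """Map a weighted total onto the scoring bands above."""
    if total >= 180:
        return "Excellent fit"
    if total >= 140:
        return "Good fit with some concerns"
    if total >= 100:
        return "Mediocre fit"
    return "Walk away"

# Hypothetical example: a firm scoring 4 on every criterion
# totals 4 x 40 = 160, landing in the "good fit" band.
example_scores = {criterion: 4 for criterion in WEIGHTS}
total = weighted_total(example_scores)
print(total, band(total))  # 160 Good fit with some concerns
```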
How to Structure Your Evaluation Process
Phase 1: Initial Research (Week 1)
Shortlist 3-5 firms based on website, referrals, or directory research
Review their content (blog posts, case studies, methodology docs)
Check for red flags (vague promises, no real content, generic positioning)
Eliminate obvious mismatches
Phase 2: Discovery Calls (Week 2)
Schedule 30-60 minute calls with remaining firms
Ask the 10 critical questions
Take detailed notes on answers
Request proposals from 2-3 companies
Phase 3: Proposal Review (Week 3)
Review proposals for clarity, specificity, and value
Check for deliverables vs. vague promises
Understand pricing structure and what's included
Request reference contacts
Phase 4: Reference Checks (Week 3-4)
Talk to 2-3 references for each finalist
Ask about delivery, team continuity, outcomes, and value
Look for consistent stories (good or bad)
Phase 5: Final Decision (Week 4)
Complete evaluation scorecard for each finalist
Compare on value, not just price
Consider cultural fit and communication style
Make decision and notify all firms
Timeline: 4 weeks from initial research to signed contract. Rushing this process leads to expensive mistakes. Taking longer than 4 weeks means you might be overthinking it.
What About Price?
You'll notice price isn't one of the 10 critical questions. That's intentional.
Here's why: price is an output of scope, quality, and team seniority. If you lead with price, you'll either get lowball bids from consultants who'll cut corners, or you'll choose based on cost and regret it later.
Instead, evaluate on fit and capability first. Then assess whether their pricing is reasonable for the value delivered.
General Investment Expectations:
AI consulting engagements vary widely based on scope and complexity. Strategic work (assessments, roadmaps, advisory) typically requires upfront investment before you see implementation ROI. Project-based work (pilots, integrations, automation) scales with technical complexity. Long-term partnerships (embedded teams, ongoing advisory) require sustained monthly commitment.
For mid-market companies, meaningful AI initiatives typically start in the tens of thousands for focused projects, with comprehensive programs reaching six figures annually.
If a consultant's pricing seems dramatically higher or lower than others, understand why. They may price at a premium because of reputation, past work, or resourcing availability. The cheapest option almost always costs more in the long run.
For more context on engagement models, see What is AI Consulting?
Common Questions About Choosing Consultants
Should I hire a large firm (Big 4) or boutique consultant?
Large firms: Better for enterprise-scale implementations, global coordination, or when you need the brand name for internal politics. Expect to pay 2-3x more, receive PowerPoint-heavy deliverables, and have junior staff doing most of the work.
Boutique firms: Better for mid-market companies, faster decision-making, direct access to senior expertise, more flexible and more cost-effective. Less name recognition but often better delivery.
For mid-market companies ($1M-$100M ARR), boutique firms usually deliver better value.
What if I can't get reference contacts?
If a consultant won't provide reference contacts, either their clients aren't happy or their work is too new to have references. Either way, that's a significant risk. Only proceed if they can provide alternative proof (detailed case studies, portfolio work, relevant team backgrounds).
Should consultants guarantee results?
No. Consultants can guarantee deliverables (a working system, a strategy document, training sessions). They can't guarantee business outcomes (10x revenue, 50% cost reduction) because those depend on factors outside their control (your team's adoption, market conditions, data quality).
Be suspicious of consultants who guarantee specific business outcomes.
How do I know if the methodology is good?
Ask them to walk you through a recent similar project: what phases, what deliverables at each phase, how long each took, where they encountered issues. A good methodology should have clear gates between phases, specific deliverables, and proven patterns for handling common problems.
What if my project is unique and they haven't done it before?
No project is entirely unique. Look for transferable patterns. If you need real estate lead automation and they've done lead automation for other industries, that's relevant. If they have zero experience with anything similar, that could be a risk.
What Makes SeidrLab Different
Since you're evaluating consultants, here's how we stack up against the criteria in this guide:
Transparent Outcomes
We can provide specific results with client names (where permitted). 10x pipeline increases, 40% time savings, specific investment figures. Not vague "efficiency improvements."
Team Continuity
Small team, senior consultants only. The partners who sell the work do the work. No bait-and-switch, no junior staff learning on your time. We also draw on a network of trusted, specialised technical partners who can supplement our skills and capacity when a project calls for it.
Transparent Pricing
We share pricing ranges up front. Fixed-price Sprints, clear retainer scopes. You know what you're paying and what you're getting before the first call.
Technology-Agnostic
We don't resell tools or have vendor relationships. We recommend what works for your team and environment, whether that's off-the-shelf platforms or custom builds.
Proven Methodology
We've been doing this for a while. Every project starts from battle-tested patterns. You get operational systems faster because we're not inventing the approach from scratch.
If you want to evaluate us against other firms, feel free to use this guide. We welcome the scrutiny.
For more context, see What is AI Consulting? and The 4-Phase AI Adoption Framework.
Conclusion
Choosing an AI consultant isn't about finding the biggest firm or the cheapest option. It's about finding the partner who will deliver real systems that solve real problems, then ensure you can maintain them long-term.
Use the 10 critical questions, watch for red flags and green flags, complete the evaluation scorecard, and make a decision based on fit and capability, not just price.
The right consultant will accelerate your AI initiatives, de-risk your investment, and build your internal capability. The wrong consultant will burn six months and half your budget, then leave you with nothing to show for it.
Choose carefully.
About SeidrLab
SeidrLab is a boutique AI consultancy helping mid-market companies ($1M-$100M ARR) adopt AI systematically. We combine strategic advisory with hands-on technical delivery across three service models: AI Advisory (retainer), Sprint-Based Projects (fixed-scope), and Embedded Engagements (long-term team augmentation). Our clients include private equity firms, real estate companies, and professional services organisations.
Learn more.