
How a B2B manufacturing marketplace put 100 qualified companies into HubSpot every day with a production AI agent

Timeline: 6 weeks

Key results

  • 100 companies/day loaded into HubSpot
  • 45–60 min/day per rep recovered
  • Production deployed on Vertex AI Agent Engine
  • Claude CoWork installed with custom integrations

Outcomes

Efficiency gain
45–60 minutes per rep per day recovered from manual prospect research and pipeline prioritisation
EBITDA impact
100 companies/day entering the HubSpot pipeline automatically, replacing a manual research and entry process that could scale only by adding rep headcount
IP operationalised
Prospecting agent architecture fully owned by the client; Claude CoWork deployed with custom HubSpot integrations and company-specific frameworks

Tags

B2B manufacturing AI HubSpot automation Google ADK Agentic AI Sales prospecting automation Vertex AI

Without scaling the research team, without manual CRM entry, and without prototype tooling that required ongoing maintenance to keep running.


The bridge. Reps were spending the first hour of their day manually researching prospects, deciding what to prioritise, and entering data into HubSpot. Six weeks later, that work was done before anyone arrived. The agent runs overnight; the pipeline is waiting in the morning. That shift is the work.

The client is a B2B manufacturing marketplace based in Europe, connecting buyers of custom-manufactured industrial parts with production facilities across the region. The sales motion requires continuous outbound prospecting: identifying manufacturers and procurement teams who could benefit from the platform, researching company profiles, qualifying fit, and loading decision-maker contacts into HubSpot for the sales team to work from.

The bottleneck was not the sales team. It was everything that happened before the first conversation. Manual company research took 45 minutes to an hour per rep per day. Qualifying fit against the marketplace’s buyer and supplier profile required looking across multiple sources with no consistent process. HubSpot entry was manual, inconsistent, and a task reps resisted because it pulled time away from actual selling.

The question the marketplace brought to SeidrLab was whether this pipeline could run automatically, every day, without requiring a data operations team to keep it moving.


How we designed the prospecting agent around the actual research workflow

We started by mapping the research process reps were already doing manually: what sources they used, what they were looking for in each source, how they decided a company was worth adding to HubSpot, and what information they needed attached to each record before they could make first contact.

The design principle that came out of that mapping was straightforward: the agent should replicate the judgment a good rep applies during research, not just automate the data entry at the end. That meant the prospecting agent needed to assess fit, not just retrieve data. It needed to pull from multiple sources, apply qualification logic, and produce a record that a rep could act on immediately, without a follow-up research step before making contact.

We built the agent using Google Agent Development Kit (ADK) with a Python backend. The agent pulls company and contact data through a Cloudflare Workers remote MCP server, which provides a standardised interface to the external data sources the research workflow depends on. The qualification logic is encoded in the agent’s instructions: company size, industry fit, operational footprint, and signals that indicate procurement decision-making authority.
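The qualification criteria described above can be sketched as a simple fit check. A minimal sketch, assuming illustrative field names and thresholds; the production values live in the agent's instructions, not in code like this, and the industries and ranges shown here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CompanyCandidate:
    # Illustrative fields; the real agent derives these from multiple sources
    name: str
    employee_count: int
    industry: str
    countries: list[str]           # operational footprint
    has_procurement_contact: bool  # signal of decision-making authority

# Hypothetical target profile -- stand-in for the marketplace's actual criteria
TARGET_INDUSTRIES = {"metal fabrication", "cnc machining", "injection moulding"}

def qualifies(c: CompanyCandidate) -> bool:
    """Apply the fit criteria before any record reaches the CRM."""
    return (
        50 <= c.employee_count <= 5000
        and c.industry in TARGET_INDUSTRIES
        and len(c.countries) >= 1
        and c.has_procurement_contact
    )
```

The point of making the check explicit is that it runs as a gate, not a post-hoc label: a candidate that fails never generates a HubSpot record at all.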

Key takeaway. An agent that automates data entry without encoding the qualification logic just fills the CRM with noise faster. The intelligence has to be in the agent’s judgment, not just its speed.


How we built the enrichment pipeline that loads 100 companies daily

The enrichment step was where the volume became meaningful. A rep researching manually can process a handful of companies in an hour. The agent loads 100 qualified records into HubSpot daily, with contact information, company profile, and qualification signals attached to each.

The pipeline runs on a daily schedule. The agent sources company targets against the marketplace’s buyer and supplier profile, runs each candidate through the qualification criteria, enriches each qualifying record with contact data and company signals, and writes the completed records directly into HubSpot via the CRM integration. Records that don’t meet the qualification threshold are not entered. The filter runs before the record is written, not after.
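The daily run described above has a simple shape: source, qualify, enrich, write, with the filter sitting before the write. A minimal sketch of that control flow, with the four stages passed in as functions; the stage names and the `limit` parameter are assumptions for illustration, not the client's actual implementation.

```python
def run_daily_pipeline(source, qualify, enrich, write_to_hubspot, limit=100):
    """One scheduled run. Records that fail qualification are dropped
    *before* anything is written to the CRM."""
    written = 0
    for candidate in source():            # candidate companies against the ICP
        if not qualify(candidate):        # disqualified records never reach HubSpot
            continue
        record = enrich(candidate)        # attach contact data + company signals
        write_to_hubspot(record)          # write the completed record
        written += 1
        if written >= limit:              # daily volume target
            break
    return written
```

Structuring the run this way keeps the volume target and the quality gate as two independent knobs: raising the limit never loosens the filter.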

The HubSpot records arrive structured for immediate use: company name, industry, size, relevant signals, and decision-maker contact attached. Reps open HubSpot in the morning and see a queue of researched, qualified, contact-ready companies. The prioritisation work, deciding which records to engage today versus next week, takes minutes, not an hour.
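A record shaped for immediate use might look like the following when written through HubSpot's companies API (`POST /crm/v3/objects/companies`, which takes a `properties` object). The standard property names (`name`, `industry`, `numberofemployees`) are real HubSpot defaults; `qualification_signals` is an assumed custom property, and the record fields are illustrative.

```python
def to_hubspot_payload(record: dict) -> dict:
    """Shape an enriched record for HubSpot's companies endpoint.
    Assumes 'qualification_signals' exists as a custom company property."""
    return {
        "properties": {
            "name": record["name"],
            "industry": record["industry"],
            "numberofemployees": str(record["employee_count"]),
            "qualification_signals": "; ".join(record["signals"]),
        }
    }
```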

Key takeaway. Volume and quality are both filter problems. The pipeline’s job is to deliver qualified records, not maximise the number of records. A rep who opens HubSpot to 100 researched, qualified companies works differently from a rep who opens it to 100 raw entries that still need assessment.


How we deployed the agent to production on Vertex AI Agent Engine

A working prototype is not a production system. We deployed the agent on Vertex AI Agent Engine, Google Cloud’s managed runtime for agentic workloads. This gave the agent a reliable execution environment with scheduling, logging, monitoring, and the ability to handle the daily run volume without manual intervention.

The Cloudflare Workers remote MCP server provides the agent’s tool interface. It abstracts the external data sources behind a consistent API that the agent calls through the MCP protocol. This architecture separates the agent logic from the data access layer, which means data sources can be updated or extended without rewriting the agent itself. Adding a new research source is a change to the MCP server, not to the agent.
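The separation can be illustrated with a stand-in for the server's tool registry: the agent only knows tool names, and the server maps each name to a concrete data source. The tool names and registry class below are hypothetical, a sketch of the boundary rather than the client's actual MCP interface.

```python
class ResearchTools:
    """Stand-in for the remote MCP server's tool registry."""

    def __init__(self):
        self._tools = {}

    def register(self, name, fn):
        # Adding or swapping a data source is a server-side change only
        self._tools[name] = fn

    def call(self, name, **kwargs):
        # The agent's side of the boundary: call tools by name
        return self._tools[name](**kwargs)

tools = ResearchTools()
tools.register("company_profile", lambda domain: {"domain": domain, "source": "registry_a"})
# Extending research coverage touches only the registry, not the agent:
tools.register("funding_signals", lambda domain: {"domain": domain, "source": "registry_b"})
```

The agent's instructions reference `company_profile` and `funding_signals` by name; where the data actually comes from is invisible to it, which is what makes the data layer replaceable.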

The client owns the full system: the agent code, the MCP server, the GCP project, and the HubSpot integration. The system runs without SeidrLab involvement in the daily operation. Monitoring is handled through standard GCP observability tooling. The daily schedule, the qualification parameters, and the HubSpot field mapping are all configurable by the client’s team.
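The client-configurable surface described above amounts to a small set of parameters. A hypothetical sketch of what that configuration might contain; the names, the cron expression, and the values are illustrative assumptions, not the client's actual settings.

```python
# The knobs the client's team can change without touching agent code.
PIPELINE_CONFIG = {
    "schedule_cron": "0 4 * * *",   # daily run before the workday starts
    "daily_limit": 100,             # qualified records written per run
    "qualification": {              # thresholds fed into the agent's instructions
        "min_employees": 50,
        "max_employees": 5000,
    },
    "hubspot_field_map": {          # internal field -> HubSpot property
        "employee_count": "numberofemployees",
    },
}
```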

Key takeaway. Production deployment is not a deployment step added at the end. It is an architectural choice made at the beginning. A system designed for production from the start is fundamentally different from a prototype retrofitted to run reliably.


How we extended AI capability to the full team with Claude CoWork

Alongside the prospecting agent, we installed Claude CoWork across the team with custom plugins and frameworks tailored to the marketplace’s workflows. Where the agent handles the prospecting pipeline autonomously, Claude CoWork gives the team AI-assisted capability at the point of individual work: writing, analysis, CRM interactions, and operational tasks that benefit from AI assistance but require human judgment and direction.

The HubSpot integration in Claude CoWork lets reps pull contact and company data directly into their Claude workspace, draft outreach against the enriched record, and push updates back to HubSpot without leaving the conversation interface. The custom frameworks provide company-specific context: product knowledge, buyer personas, common objections, and messaging guidelines. The outputs are calibrated to the marketplace’s sales motion rather than generic AI output.

The combination means the marketplace’s AI capability now covers both ends of the sales workflow: the upstream pipeline work that runs without rep involvement, and the downstream engagement work that reps lead with AI assistance.

Key takeaway. Prospecting automation and AI-assisted execution are two different problems that require two different tools. Building both, the autonomous pipeline and the human-directed assistant, closes the gap between a full lead queue and a higher-performing sales team.


What changed

Before. Reps spending 45–60 minutes per day on manual company research, qualification, and HubSpot data entry before any selling work began. Research quality was inconsistent across the team. The pipeline volume was capped by rep research capacity, not by market size. Staff using general AI tools without integrations or context, producing outputs that required significant editing to be usable.

After. One hundred researched, qualified, contact-ready companies enter HubSpot daily through an agent that runs overnight on Vertex AI Agent Engine. Reps arrive to a prioritised queue. Claude CoWork with custom HubSpot integrations and company-specific frameworks is installed and in use across the team. The research operation that previously relied on rep time to run now runs independently of rep availability.


Does this fit your situation?

If your sales team’s research and CRM entry work is growing with headcount rather than with tooling, and you want a production system that runs overnight without ongoing maintenance: that is the build we do in six weeks.

Book a discovery call →