
Lead Sourcing with Agents

Use an agent to turn public signals into a prospect list and draft outreach for human review.

Rogier Muller · April 14, 2026 · 5 min read

A lot of agent demos stop at code. This one goes a step further: use an agent to turn a public signal stream into a prospect list, then draft outreach from that list. The idea behind the source demo is simple. If your ideal customer is already commenting on relevant posts, an agent can help find those comments, extract the people behind them, and draft first-pass outreach emails.

That is not fully automated sales. It is a narrow research assistant with one job: collecting, filtering, and structuring public signals faster than a human can by hand.

What the workflow does

The useful pattern is not “let the model browse the internet and sell.” The useful pattern is:

  • define a narrow audience signal
  • collect public posts or comments from a source you can access legitimately
  • extract names, handles, company context, and the reason they look relevant
  • score or bucket the results
  • draft a short outreach note for human review

In the source example, the signal is comments on LinkedIn posts. The same pattern can apply to other public surfaces: forum replies, GitHub issue threads, community posts, conference speaker lists, or newsletter comments. The source changes. The workflow stays similar.
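
To make that concrete, here is a minimal sketch of the record such a workflow can produce. The field names and the coarse score are illustrative assumptions, not part of the source example; the sketches in this post stick to Python's standard library.

  from dataclasses import dataclass

  @dataclass
  class Lead:
      name: str
      handle: str            # profile link or username on the source platform
      company: str
      role: str
      signal_quote: str      # the exact public comment that put them on the list
      relevance_note: str    # one sentence on why they look relevant
      score: int = 0         # coarse bucket: 0 = reject, 1 = maybe, 2 = strong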

Why agents help

This task is tedious in a way agents handle well. A human can do it, but the work is repetitive: open page, scan comments, copy names, check context, repeat. An agent can compress that loop.

The main gain is throughput with structure. If the agent is constrained well, it can turn a messy feed into a table of candidates in minutes instead of hours.

That matters most when the target list is small and specific. Broad lead gen is still noisy. Narrow signal-based sourcing is where the workflow starts to pay off.

A practical implementation pattern

Start with a single source and a single output format. Do not begin with “find all prospects everywhere.” Begin with one feed and one reviewable artifact.

A workable setup looks like this:

  1. Define the signal. Example: people commenting on posts about a specific problem, tool, or workflow.
  2. Capture the raw items. Use a browser agent, scraper, or manual export, depending on the site’s rules and your access.
  3. Extract fields. Name, profile link, company, role, comment text, and a short relevance note.
  4. Filter aggressively. Keep only people who match the ideal customer profile (ICP) and show a real use case or pain point. This step and the next are sketched in code after the list.
  5. Draft outreach. Ask the model for a short email based on the public signal, not a generic pitch.
  6. Review before sending. A human should check tone, accuracy, and whether the signal actually justifies contact.
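
To ground steps 4 and 5, a rough sketch reusing the Lead record from earlier. The ICP terms and the email skeleton are placeholders, not recommendations; a real filter would encode your actual ICP rules.

  # Placeholder pain-point terms; swap in the language your ICP actually uses.
  ICP_TERMS = {"onboarding", "manual review", "handoff"}

  def keep(lead: Lead) -> bool:
      """Step 4: keep only leads whose quoted signal shows a real pain point."""
      text = lead.signal_quote.lower()
      return any(term in text for term in ICP_TERMS)

  def draft_note(lead: Lead) -> str:
      """Step 5: a first-pass note grounded in the quoted signal, for human review."""
      first_name = lead.name.split()[0] if lead.name.strip() else "there"
      return (
          f"Hi {first_name},\n\n"
          f'Saw your comment: "{lead.signal_quote[:120]}". '
          "..."  # the body should speak to that specific problem, not a generic pitch
      )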

If you want this to hold up in practice, keep the output schema boring. A CSV or markdown table is enough. Fancy agent memory is not the point.
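
In practice, boring can mean a plain CSV writer and nothing else. The columns here match the illustrative Lead fields above; the filename is arbitrary.

  import csv

  FIELDS = ["name", "handle", "company", "role", "signal_quote", "relevance_note", "score"]

  def write_queue(leads, path="review_queue.csv"):
      """Write the reviewable artifact: one CSV row per candidate."""
      with open(path, "w", newline="", encoding="utf-8") as f:
          writer = csv.DictWriter(f, fieldnames=FIELDS)
          writer.writeheader()
          for lead in leads:
              writer.writerow({field: getattr(lead, field) for field in FIELDS})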

Where this breaks

There are real limits.

The first is data quality. Public comments are noisy. People joke, agree, or react without buying intent. An agent can surface false positives quickly if your filter is weak.

The second is platform fragility. Comment layouts, access rules, and rate limits change. Any workflow that depends on a single site’s UI will need maintenance.

The third is compliance and trust. Public does not mean free-for-all. Teams still need to respect platform terms, privacy expectations, and internal policy. If the workflow crosses into scraping or outreach at scale, legal and ops review matter.

The fourth is message quality. A model can draft a plausible email from a public comment, but plausibility is not the same as relevance. If the note feels automated, response rates will suffer.

What makes it better

A few constraints improve the workflow more than a bigger model.

  • Use a narrow ICP.
  • Limit the source to one or two channels.
  • Require the agent to quote the exact public signal it used.
  • Add a rejection reason field so bad leads are visible (see the sketch after this list).
  • Keep a human approval step before outreach.
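
A small sketch of the quoted-signal and rejection-reason constraints together, reusing the earlier helpers: every lead either passes with its quoted signal attached or carries an explicit reason for rejection.

  def review(lead: Lead) -> tuple[bool, str]:
      """Return (keep, rejection_reason); an empty reason means the lead passed."""
      if not lead.signal_quote.strip():
          return False, "no quoted public signal"
      if not keep(lead):
          return False, "signal does not match ICP terms"
      return True, ""  # still goes to a human before any outreach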

That last point matters. The best use of the agent is often not sending email. It is reducing the time spent deciding who deserves a manual follow-up.

A simple review loop

Treat this like any other agentic workflow: build, test, review.

A small test set helps. Pick 20 public comments and see whether the agent can correctly identify the 5 that actually fit your ICP. If it cannot do that reliably, do not expand the workflow yet.
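
One way to run that check, assuming a hand-labeled set of comment ids; the labels and the bar you set are yours to define.

  def evaluate(agent_picks: set, labeled: dict) -> tuple[float, float]:
      """Precision and recall of the agent's picks against human labels."""
      truth = {cid for cid, fit in labeled.items() if fit}
      hits = agent_picks & truth
      precision = len(hits) / len(agent_picks) if agent_picks else 0.0
      recall = len(hits) / len(truth) if truth else 0.0
      return precision, recall

If precision is low on a 20-comment set, tighten the filter before adding sources; more volume will only multiply the false positives.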

This is where a short review step matters. Check whether the extracted signal is specific enough to justify outreach, not just whether the model produced a neat list.

Bottom line

The durable pattern is not “AI does sales.” It is “an agent turns public noise into a smaller, reviewable queue.” That is a real productivity gain if the source is narrow, the schema is strict, and a human still owns the final send.

If you are building agentic tooling for teams, this is a good example of where the value sits: not in replacing judgment, but in compressing the search and sorting work that happens before judgment.
