Lead scoring is one of the most discussed and least well-executed practices in marketing operations. Every CRM has a scoring feature. Every marketing team knows they should use it. And yet, most lead scoring models I've audited share the same problem: they were set up once, based on assumptions, and never validated against actual conversion data.
Here's a framework I've refined over nine years and dozens of implementations. It's not theoretical — it's built on what I've seen work across B2B companies with sales cycles ranging from two weeks to eighteen months.
Why most lead scoring fails
The typical lead scoring setup goes like this: marketing assigns points based on actions (opened an email: +5, visited pricing page: +10, downloaded a whitepaper: +15) and demographics (C-level title: +20, company size 100+: +15, target industry: +10). When a contact crosses a threshold — say, 50 points — they become an MQL and get passed to sales.
The problem? These point values are almost always arbitrary. Who decided that visiting the pricing page is worth exactly 10 points? Why is a C-level title worth more than a VP title? Usually, someone made reasonable guesses during setup and nobody revisited them.
The result: sales teams receive MQLs that don't convert, lose trust in marketing's leads, and eventually ignore the scores altogether. Sound familiar?
Separate fit scoring from engagement scoring
The first structural change that makes scoring reliable: split your score into two independent dimensions. Fit score measures who the lead is. Engagement score measures what they've done.
Fit score is based on firmographic and demographic data. Industry, company size, job title, geography, technology stack — the attributes that tell you whether this person matches your ideal customer profile (ICP). Fit scoring is relatively stable and doesn't require frequent adjustment.
Engagement score is based on behavior. Page visits, form submissions, email interactions, content downloads, webinar attendance. Engagement scoring needs regular tuning because behaviors change as your content and channels evolve.
A lead with high fit and low engagement is worth nurturing — they match your ICP but haven't shown buying intent yet. A lead with low fit and high engagement is probably researching but won't convert — a student writing a thesis, a competitor doing research, or someone at a company that's too small for your product.
Only leads with high fit AND high engagement should become MQLs. In my experience, this single structural change typically reduces MQL volume significantly while meaningfully increasing the MQL-to-SQL conversion rate. Of course, scoring is only as good as the data behind it — make sure you have a solid data quality foundation first.
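The two-dimensional qualification above can be sketched in a few lines. The thresholds, field names, and routing labels here are illustrative assumptions to tune against your own conversion data, not recommendations:

```python
from dataclasses import dataclass

# Hypothetical thresholds for illustration only.
FIT_THRESHOLD = 60
ENGAGEMENT_THRESHOLD = 50

@dataclass
class Lead:
    fit_score: int         # who the lead is (firmographics)
    engagement_score: int  # what the lead has done (behavior)

def classify(lead: Lead) -> str:
    """Route a lead based on two independent score dimensions."""
    high_fit = lead.fit_score >= FIT_THRESHOLD
    high_engagement = lead.engagement_score >= ENGAGEMENT_THRESHOLD
    if high_fit and high_engagement:
        return "MQL"        # hand off to sales
    if high_fit:
        return "nurture"    # matches ICP, no buying intent yet
    if high_engagement:
        return "monitor"    # active, but outside ICP
    return "disqualify"
```

The key design point: neither dimension can compensate for the other, which is exactly what a single blended score gets wrong.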
Build your fit score from closed-won data
Instead of guessing which attributes matter, analyze your last 100 closed-won deals. What industries are most represented? What company sizes? What job titles were involved in the buying decision? What geography?
Build your fit scoring from this data. If 70% of your closed-won deals are SaaS companies with 50-200 employees, that profile gets the highest fit score. If only 5% come from manufacturing, that segment gets a low score — not because manufacturing is bad, but because the data shows your product resonates elsewhere.
Review and refresh this analysis every six months. Your customer profile evolves as your product and market change.
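A minimal sketch of deriving fit weights from closed-won deals, assuming each deal is a simple dict. The scaling rule here (the most common segment gets full points, rarer ones proportionally less) is one illustrative choice; real models often also compare wins against the overall lead base:

```python
from collections import Counter

def fit_weights(closed_won: list[dict], attribute: str, scale: int = 20) -> dict:
    """Derive fit-score weights for one attribute (e.g. 'industry')
    from its distribution in closed-won deals. The most common value
    gets the full `scale` points; rarer values get proportionally less."""
    counts = Counter(deal[attribute] for deal in closed_won)
    top = counts.most_common(1)[0][1]
    return {value: round(scale * n / top) for value, n in counts.items()}

# Toy dataset mirroring the 70% SaaS example above.
deals = ([{"industry": "SaaS"}] * 70
         + [{"industry": "Fintech"}] * 25
         + [{"industry": "Manufacturing"}] * 5)
weights = fit_weights(deals, "industry")
# SaaS dominates the wins, so it earns the highest weight.
```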
Score engagement by intent, not volume
Not all engagement signals are equal. Visiting your blog ten times shows interest. Visiting your pricing page once shows intent. Your scoring should reflect this difference.
I categorize engagement signals into three tiers:
High intent (25-50 points): Pricing page visit, demo request, contact form submission, free trial signup. These are buying signals. A lead who visits your pricing page and then views a case study is telling you they're evaluating.
Medium intent (5-15 points): Case study downloads, product page views, webinar attendance, multiple blog visits in a short window. These show active research, not just casual browsing.
Low intent (1-3 points): Email opens, single blog visits, social media engagement. These are awareness signals. They matter for nurturing but shouldn't drive MQL qualification on their own.
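The tiered model above might look like this in code. The signal names and exact point values are placeholders within the stated ranges, to be calibrated against your own data:

```python
# Hypothetical point values within the tiers described above.
SIGNAL_POINTS = {
    # High intent (25-50)
    "demo_request": 50,
    "trial_signup": 40,
    "pricing_page_visit": 35,
    # Medium intent (5-15)
    "case_study_download": 10,
    "webinar_attended": 10,
    # Low intent (1-3)
    "blog_visit": 2,
    "email_open": 1,
}

def engagement_score(activities: list[str]) -> int:
    """Sum points for each recorded activity; unknown signals score 0."""
    return sum(SIGNAL_POINTS.get(a, 0) for a in activities)
```

Note the spread: one pricing page visit outweighs thirty email opens, which encodes the intent-over-volume principle directly in the point values.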
Add score decay. If a lead hasn't engaged in 30 days, start reducing their engagement score. A lead who was active three months ago but has gone silent isn't the same as one who's actively researching right now.
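One way to sketch decay, assuming a 30-day grace period followed by an exponential half-life; both parameters are illustrative defaults, not recommendations:

```python
from datetime import date

def decayed_score(score: float, last_engaged: date, today: date,
                  grace_days: int = 30, half_life_days: int = 30) -> float:
    """Leave the score untouched for `grace_days`, then halve it
    every `half_life_days` of continued silence."""
    idle = (today - last_engaged).days
    if idle <= grace_days:
        return score
    return score * 0.5 ** ((idle - grace_days) / half_life_days)
```

A smooth decay like this avoids the cliff effect of zeroing scores on day 31, so a lead who re-engages after six weeks doesn't start from scratch.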
The MQL-to-SQL handoff
When a lead crosses your MQL threshold, the handoff to sales needs to be clean and fast. Here's what I build into every implementation:
Automatic assignment based on territory, company size, or round-robin. No leads sitting in a queue waiting for someone to claim them.
Context in the notification. The sales rep should receive the lead's fit score, engagement score, recent activities, and the specific action that triggered MQL status. "New MQL" means nothing. "VP of Marketing at a 200-person SaaS company who viewed pricing twice this week and downloaded the ROI calculator" is actionable. HubSpot's lifecycle stages make this handoff trackable.
SLA tracking. Define how quickly sales should follow up (I recommend 4 hours during business hours) and track compliance using HubSpot workflows. If MQLs are sitting untouched for 48 hours, your scoring model doesn't matter — the leads are going cold before anyone contacts them.
Feedback loop. Sales marks every MQL as accepted (becomes SQL) or rejected (with a reason). This data is gold. If 40% of MQLs are rejected because "not the right decision maker," your fit scoring needs work. If they're rejected because "not ready to buy," your engagement threshold might be too low.
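The feedback loop lends itself to a simple quarterly tally. The field names (`accepted`, `reason`) are assumptions for this sketch; map them to whatever properties your CRM actually records:

```python
from collections import Counter

def rejection_diagnosis(mqls: list[dict]) -> dict:
    """Summarize sales feedback on MQLs. Each record has 'accepted'
    (bool) and, when rejected, a 'reason' string."""
    total = len(mqls)
    rejected = [m for m in mqls if not m["accepted"]]
    reasons = Counter(m["reason"] for m in rejected)
    return {
        "acceptance_rate": round((total - len(rejected)) / total, 2),
        "top_rejection_reasons": reasons.most_common(3),
    }
```

If the top reason points at who the lead is, fix fit scoring; if it points at timing, raise the engagement threshold.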
Validate and iterate quarterly
Every quarter, pull your scoring data and answer these questions: What percentage of MQLs converted to SQLs? What's the average score of leads that became customers versus those that didn't? Are there high-scoring leads that never converted — and why?
Adjust your point values based on what the data tells you. If webinar attendance correlates strongly with conversion, increase its score. If email opens have no correlation with pipeline, reduce or remove them.
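One rough way to quantify "correlates with conversion" is a lift ratio per signal: the conversion rate of leads who showed the signal divided by the overall baseline. Ratios well above 1.0 suggest the signal deserves more points; near 1.0, fewer. Field names are assumed for illustration:

```python
def signal_lift(leads: list[dict]) -> dict:
    """For each engagement signal, compare the conversion rate of
    leads who showed it against the overall baseline rate."""
    baseline = sum(l["converted"] for l in leads) / len(leads)
    signals = {s for l in leads for s in l["signals"]}
    lift = {}
    for s in sorted(signals):
        cohort = [l for l in leads if s in l["signals"]]
        rate = sum(l["converted"] for l in cohort) / len(cohort)
        lift[s] = round(rate / baseline, 2)
    return lift
```

With any real dataset, check cohort sizes before acting: a 3x lift on a cohort of five leads is noise, not signal.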
Lead scoring isn't a set-and-forget system. It's a model that gets better with data. The companies that treat it as an ongoing practice — reviewing, adjusting, and validating — are the ones where sales actually trusts marketing's leads. For the bigger picture on how scoring fits into your automation strategy, see my guide on marketing automation in 2026. And make sure the emails triggered by your scoring actually reach the inbox — email deliverability matters more than ever.
Want to build a lead scoring model that sales actually trusts?
Let's Talk