How BLT-Leaf Works

Understanding the scoring system, data sources, and known limitations

Overview

BLT-Leaf is an automated PR readiness checker that analyzes GitHub pull requests for operational readiness by examining:

  • CI Status: Whether automated tests and checks are passing
  • Review Health: Number of approvals, requested changes, and review freshness
  • Response Rate: How actively the author responds to feedback

Note: BLT-Leaf focuses on operational readiness, not code quality or security vulnerabilities. Use tools like SonarQube or Snyk for those aspects.

What BLT-Leaf Does NOT Analyze

BLT-Leaf focuses exclusively on operational readiness (CI status, review health, response rates). It does not currently evaluate:

  • Code correctness or logic bugs — No static analysis or bug detection
  • Performance regressions — No benchmarking or profiling
  • Security vulnerabilities — Use tools like Snyk or Dependabot
  • Code style or maintainability — Use linters like ESLint or Pylint
  • Test coverage quality — Use coverage tools like Codecov

Important: A PR with a high readiness score can still contain functional bugs, security issues, or poor code quality. Always perform manual code review.

Interactive Score Calculator

The readiness score is a weighted combination of three components (each a percentage), minus a penalty of 3 points for each unresolved review thread:
Formula: (CI × 0.4) + (Review × 0.4) + (Response × 0.2) - (3 × Conversations)

Disclaimer: This is a simplified calculator. Actual scores may include additional factors like merge conflicts, draft status, and stale reviews. The conversations deduction (-3 points each) is applied after the base score calculation.
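The simplified formula can be expressed as a small function. This is a minimal sketch of the simplified model only; the function name and the clamp at zero are assumptions, and real scores include the additional factors noted above:

```python
def readiness_score(ci: float, review: float, response: float,
                    unresolved_threads: int = 0) -> float:
    """Simplified readiness score: weighted components (0-100 each)
    minus a 3-point deduction per unresolved review thread."""
    base = ci * 0.4 + review * 0.4 + response * 0.2
    return max(0.0, base - 3 * unresolved_threads)

print(readiness_score(100, 100, 100))                       # 100.0
print(readiness_score(100, 100, 80, unresolved_threads=2))  # 90.0
```

With perfect CI and reviews, an 80% response rate, and two open threads, the score works out to (40 + 40 + 16) - 6 = 90.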

Known Edge Cases & Limitations

BLT-Leaf is not perfect. Here are scenarios where the analysis may be overconfident or inaccurate:

CI Checks

Scenario | What BLT-Leaf Says | Reality | Confidence
Fork PR, 0 checks run | ❌ "CI checks failing" | Checks queued, awaiting maintainer approval | 🔴 Low
3/5 checks passed, 2 pending | ⚠️ "CI checks failing" | Still running, not failed | 🟡 Medium
All checks passed | ✅ "CI checks passed" | Accurate | 🟢 High

Fix in progress: Distinguish between queued, running, and failed checks using GitHub check-runs API.
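A sketch of that distinction, using the `status` and `conclusion` fields that check-run objects actually carry (`status` is `"queued"`, `"in_progress"`, or `"completed"`); treating `"skipped"` and `"neutral"` conclusions as passing is an assumption of this sketch:

```python
def classify_checks(check_runs: list[dict]) -> str:
    """Summarize check-run objects from /commits/{sha}/check-runs."""
    if not check_runs:
        return "no_checks"  # e.g. fork PR awaiting maintainer approval
    if any(c["status"] in ("queued", "in_progress") for c in check_runs):
        return "pending"    # still running, not failed
    if all(c.get("conclusion") in ("success", "skipped", "neutral")
           for c in check_runs):
        return "passed"
    return "failed"
```

The first two branches are exactly the low- and medium-confidence rows in the table above: "no checks run" and "some checks pending" stop being reported as failures.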

Review Health

Scenario | What BLT-Leaf Says | Reality | Confidence
Bot requested changes | ⚠️ "Changes requested" | Bot auto-comment, not blocking | 🟡 Medium
1 approval, draft PR | ❌ "Draft mode" | Intentional WIP | 🟢 High
Stale review after typo fix | ⚠️ "Stale review" | Approval still valid | 🟡 Medium

Fix in progress: Filter bot reviews and detect PR intent from description.
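Bot filtering could look like the sketch below. Both signals are real GitHub API conventions: a GitHub App reviewer has `"type": "Bot"` on its user object, and App logins end in `[bot]` (e.g. `github-actions[bot]`). The helper itself is illustrative, not the shipped implementation:

```python
def human_reviews(reviews: list[dict]) -> list[dict]:
    """Drop bot-submitted reviews before scoring review health."""
    def is_bot(user: dict) -> bool:
        return (user.get("type") == "Bot"
                or user.get("login", "").endswith("[bot]"))
    return [r for r in reviews if not is_bot(r.get("user", {}))]
```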

Response Rate

Scenario | What BLT-Leaf Says | Reality | Confidence
10 "LGTM" comments, 0 responses | ❌ "Low response rate (0%)" | No response needed | 🔴 Low
5 questions, 3 answered | ⚠️ "Response rate: 60%" | Accurate | 🟢 High

Fix in progress: Use GraphQL review threads to detect resolved vs unresolved comments.
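The planned approach could look like this. `reviewThreads` and `isResolved` are real fields on the GraphQL `PullRequest` object; the helper that turns them into a response rate is an illustrative assumption:

```python
# GraphQL query for thread resolution state (first 100 threads only):
THREADS_QUERY = """
query($owner: String!, $repo: String!, $number: Int!) {
  repository(owner: $owner, name: $repo) {
    pullRequest(number: $number) {
      reviewThreads(first: 100) { nodes { isResolved } }
    }
  }
}
"""

def response_rate(threads: list[dict]) -> float:
    """Percentage of review threads resolved; 100% when nothing
    needs a reply (the "10 LGTM comments" case above)."""
    if not threads:
        return 100.0
    resolved = sum(1 for t in threads if t["isResolved"])
    return 100.0 * resolved / len(threads)
```

Because "LGTM"-style comments never open a review thread, a PR with no threads scores 100% instead of today's false-positive 0%.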

When to Ignore BLT-Leaf's Score

While BLT-Leaf provides useful operational insights, you should rely more on human judgment in these cases:

  • Major refactors or architectural changes

    Large-scale changes may need deeper review regardless of CI status

  • Large auto-generated files are present

    Package locks, generated code, or migration files may skew metrics

  • CI is waiting for manual approval

    Fork PRs or security-sensitive workflows may need maintainer approval to run

  • Project has custom review rules

    Some teams require specific approvers or domain expert reviews

  • Emergency hotfixes are needed

    Critical production fixes may bypass normal review processes

Bottom Line: In these cases, treat the score as informational only. Use your expertise and project context to make the final merge decision.

Understanding Confidence Levels

🟢 High Confidence: Trust the analysis. All required data is available and unambiguous.

    Example: "5/5 CI checks passed"

🟡 Medium Confidence: Review manually. Some data is missing or ambiguous.

    Example: "Bot requested changes"

🔴 Low Confidence: Don't trust blindly. Critical data is missing or this is a known false positive.

    Example: "0 checks run (fork PR)"

How to Improve Your Readiness Score

If your PR has a low readiness score, here are specific actions you can take to improve it:

Issue | How to Fix
CI checks failing | Re-run workflows, fix failing tests, resolve linting errors
No approvals | Request reviewers, ping team members, address feedback
Low response rate | Reply to unresolved comments, push commits addressing feedback
Draft mode | Mark PR as "Ready for review" when complete
Behind base branch | Rebase or merge latest changes from base branch
Open conversations | Resolve review threads by addressing comments or marking as resolved

Pro Tip: Use the "Update" button on each PR to refresh data and see your improvements reflected in real time.

Understanding Rate Limits

BLT-Leaf uses the GitHub API to analyze PRs. GitHub enforces rate limits to prevent abuse:

Without GitHub Token

  • Limit: 60 requests/hour (shared across all users)
  • PR Analyses: ~12 per hour
  • Experience: May see "Rate limit exceeded" errors
  • Wait Time: Up to 60 minutes when limit hit

With GitHub Token

  • Limit: 5,000 requests/hour per user
  • PR Analyses: ~1,000 per hour
  • Experience: Smooth, no interruptions
  • Wait Time: Rare (only if you analyze 1000+ PRs/hour)

Recommendation: If you're analyzing multiple PRs or using BLT-Leaf frequently, add a GitHub Personal Access Token to avoid rate limit errors. Each user with a token gets their own 5,000 req/hour quota.
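The analysis counts above follow directly from the ~5 API calls per analysis. As a sketch, the remaining budget can be estimated from the real `X-RateLimit-Remaining` response header that GitHub returns on every call (the helper itself is illustrative):

```python
def analyses_remaining(headers: dict, calls_per_analysis: int = 5) -> int:
    """How many more PR analyses fit in the current rate-limit window."""
    remaining = int(headers.get("X-RateLimit-Remaining", 0))
    return remaining // calls_per_analysis

print(analyses_remaining({"X-RateLimit-Remaining": "60"}))    # 12   (no token)
print(analyses_remaining({"X-RateLimit-Remaining": "5000"}))  # 1000 (with token)
```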

Data Freshness & Staleness

BLT-Leaf uses cached results for up to 10 minutes to reduce GitHub API usage and improve performance.

During this period, the following may not appear immediately:

  • New CI runs or check results
  • New reviews or approvals
  • New comments or resolved conversations
  • Changes to PR status (merged, closed, reopened)

Force Refresh: Use the "Update" button on each PR row to force re-analysis and bypass the cache. This fetches the latest data from GitHub immediately.
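The cache-plus-force-refresh behavior can be sketched as a simple TTL cache. Everything here is illustrative (the real implementation is not shown): `analyze` stands in for the GitHub API calls, and `force=True` corresponds to the "Update" button:

```python
import time

CACHE_TTL = 600  # 10 minutes, in seconds
_cache: dict = {}  # pr_key -> (fetched_at, result)

def get_analysis(pr_key: str, analyze, force: bool = False):
    """Return a cached result unless it is stale or a refresh is forced."""
    entry = _cache.get(pr_key)
    if not force and entry and time.time() - entry[0] < CACHE_TTL:
        return entry[1]  # fresh enough: skip the API round-trip
    result = analyze(pr_key)
    _cache[pr_key] = (time.time(), result)
    return result
```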

What Data Does BLT-Leaf Access?

BLT-Leaf is transparent about what data it accesses. Here are all the GitHub API endpoints used:

Data Point | GitHub API Endpoint | What We Extract
PR Metadata | /repos/{owner}/{repo}/pulls/{number} | Title, state, mergeable_state, draft status
CI Checks | /repos/{owner}/{repo}/commits/{sha}/check-runs | Status, conclusion, check names
Reviews | /repos/{owner}/{repo}/pulls/{number}/reviews | Approval state, reviewer, submitted date
Review Comments | /repos/{owner}/{repo}/pulls/{number}/comments | Comment body, author, line numbers
Issue Comments | /repos/{owner}/{repo}/issues/{number}/comments | General discussion, feedback
Timeline | /repos/{owner}/{repo}/issues/{number}/timeline | Events, labels, assignments

API Usage: Each PR analysis uses approximately 5 API calls. BLT-Leaf caches results for 10 minutes to minimize redundant requests.
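For reference, the endpoints in the table expand to URLs like this (a sketch only; the actual client code is not shown):

```python
API = "https://api.github.com"

def analysis_endpoints(owner: str, repo: str, number: int, sha: str) -> list[str]:
    """One URL per data point in the table above."""
    return [
        f"{API}/repos/{owner}/{repo}/pulls/{number}",
        f"{API}/repos/{owner}/{repo}/commits/{sha}/check-runs",
        f"{API}/repos/{owner}/{repo}/pulls/{number}/reviews",
        f"{API}/repos/{owner}/{repo}/pulls/{number}/comments",
        f"{API}/repos/{owner}/{repo}/issues/{number}/comments",
        f"{API}/repos/{owner}/{repo}/issues/{number}/timeline",
    ]
```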

Privacy & Data Usage

BLT-Leaf only accesses public GitHub PR metadata and review information through the official GitHub API.

We do NOT:

  • Store source code contents or file diffs
  • Share data with third parties or external services
  • Analyze private repositories without explicit authorization
  • Modify or write to your repositories (read-only access)

Security: All analysis is performed using read-only GitHub API access. BLT-Leaf cannot make changes to your code, issues, or pull requests.

Troubleshooting Common Issues

Having trouble understanding your PR score? Here are solutions to common issues:

Why is my score 0% when everything looks fine?

Likely cause: CI checks are queued (common with fork PRs).

Solution: Look for the 🔴 Low confidence badge next to the CI score. If present, check GitHub to see if workflows need maintainer approval. This is a known limitation — BLT-Leaf treats "no checks run" as failure.

Why does it say "changes requested" when it's just a bot?

Likely cause: Bot filtering is not yet implemented.

Solution: Manually verify if the "changes requested" is from a bot (e.g., linter, formatter). If so, you can safely ignore it. Bot filtering is in progress.

My response rate is 0% but I don't need to respond

Likely cause: Comments are informational (e.g., "LGTM", "+1").

Solution: If all comments are approvals or acknowledgments, no response is needed. This is a known limitation — BLT-Leaf counts all comments equally. Look for the 🔴 Low confidence badge.

I'm getting "Rate limit exceeded" errors

Likely cause: You've analyzed more than 12 PRs in the past hour without a GitHub token.

Solution: Wait for the rate limit to reset (check the timer in the header), or add a GitHub Personal Access Token to get 5,000 requests/hour instead of 60.

The score changed but I didn't update the PR

Likely cause: Someone added a review, CI checks completed, or the cache expired.

Solution: This is normal! BLT-Leaf refreshes data every 10 minutes. The score reflects the current state of the PR on GitHub.

Pro Tip: Always check the confidence badges (🟢 High, 🟡 Medium, 🔴 Low) next to each score component. Low confidence means the data is incomplete or ambiguous — verify manually on GitHub.

Scoring Model Information

Current Scoring Model: v1.0

Last Updated: February 2026

Major changes to the scoring algorithm are documented in the release notes. Minor adjustments may occur without version changes.

Frequently Asked Questions

Why does my PR show "CI failing" when checks haven't started?

This is a known limitation. BLT-Leaf currently treats "no checks run" as failure. This commonly happens with fork PRs where workflows need maintainer approval. We're working on distinguishing queued vs failed checks.

Can I customize the scoring weights?

Not yet, but it's on the roadmap. Currently, CI and Review each account for 40% of the score, while Response Rate accounts for 20%.

Does BLT-Leaf check for security vulnerabilities?

No. BLT-Leaf only checks operational readiness (CI, reviews, response rate). Use tools like Snyk, Dependabot, or SonarQube for security analysis.

How often is data refreshed?

Data is cached for 10 minutes. You can manually refresh by clicking the "Refresh" button on the dashboard.

What GitHub API endpoints does BLT-Leaf use?

BLT-Leaf uses: /pulls/{number}, /commits/{sha}/check-runs, /pulls/{number}/reviews, /pulls/{number}/comments, /issues/{number}/comments, and /issues/{number}/timeline. Each PR analysis uses approximately 5 API calls.

Want to Improve BLT-Leaf?

If you'd like to help improve scoring accuracy, fix known limitations, or add new features:

  • Check open issues: Browse known bugs and feature requests on GitHub

  • Review the analysis engine: Understand how scoring works under the hood

  • Propose new heuristics: Suggest better ways to detect PR readiness

  • Contribute code: Submit PRs to improve BLT-Leaf itself
