Michael Pilgram, Systems Thinker & Technology Strategist
21 min read

Why I Built an Audit Tool for Developers Who Care

The web development industry deserves better audit tools. GuardianScan helps developers ship quality work that actually helps their customers grow—catching performance, accessibility, and security issues in 45 seconds.

Tags: web-development, performance, accessibility, standards, guardianscan

There's a moment in every website audit where I open four different tools, cross-reference their outputs, and manually check another dozen things they all miss. It's tedious. And after doing this hundreds of times over the years, it started to feel less like diligence and more like a system failure.

But here's what really bothered me: every issue I caught late was an issue that could've been caught early. Every accessibility problem, every performance bottleneck, every broken piece of structured data—these weren't just technical problems. They were missed opportunities for my clients to serve their customers better, to grow faster, to compete more effectively.

The web development industry has incredible tools for writing code, but we're still auditing sites like it's 2020. We're checking Core Web Vitals, WCAG 2.2, AI search readiness, security headers—all separately, all manually, all taking 20-30 minutes per audit. And whilst we're doing that, we're not shipping quality work. We're not helping our customers grow.

I kept thinking: this shouldn't be this hard. Making quality accessible to developers shouldn't require a toolkit of six different services and manual cross-referencing.

So I built GuardianScan. Not to build another SaaS product, but to raise the floor on what "good enough" looks like. To make it genuinely easy for developers to ship quality work that helps their customers succeed.

The Real Cost of Poor Tooling

If you've read my post on pattern matching in decision making, you know our brains are wired to spot repetition. After years of audits, I started seeing the same preventable issues repeatedly:

  • Missing security headers that expose customers to risk
  • Images without explicit width/height causing layout shift (killing conversions)
  • Poor font loading strategies creating sluggish interfaces
  • Accessibility violations that lock out potential customers
  • Oversized JavaScript bundles slowing down mostly-static content
  • Heavy client-side rendering making simple pages feel slow
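Several of these are mechanically detectable from the delivered HTML. As an illustrative sketch (not GuardianScan's actual implementation), here's how the most common layout-shift culprit above, images without explicit dimensions, could be flagged using Python's standard library:

```python
from html.parser import HTMLParser

class UnsizedImageChecker(HTMLParser):
    """Collects <img> tags that lack explicit width/height attributes,
    the most common cause of cumulative layout shift."""
    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        names = {name for name, _ in attrs}
        # Flag the image unless BOTH width and height are declared
        if not {"width", "height"} <= names:
            self.flagged.append(dict(attrs).get("src", "(no src)"))

def find_unsized_images(html: str) -> list[str]:
    checker = UnsizedImageChecker()
    checker.feed(html)
    return checker.flagged

page = '<img src="/hero.jpg"><img src="/logo.png" width="120" height="40">'
print(find_unsized_images(page))  # ['/hero.jpg']
```

A real audit runs against the rendered DOM rather than raw HTML, but the underlying check is this simple.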

These aren't just technical problems. Each one costs real money and real opportunities:

  • Slow sites lose conversions
  • Poor accessibility excludes customers
  • Security issues destroy trust
  • Bad AI search optimisation means invisibility to growing traffic sources

The pattern was clear: we have better frameworks than ever, but we're still using yesterday's audit tools to check today's work. And that gap is costing our customers growth.

Lighthouse is brilliant for what it does, but it's framework-agnostic by design. It won't flag that your Next.js site is shipping 300KB of JavaScript for a static blog post, or that your Schema.org markup is malformed. Accessibility checkers like WAVE and Axe catch maybe 30-40% of WCAG violations according to WebAIM research—the rest require manual testing. SEO tools like Screaming Frog focus on meta tags and sitemaps but miss Core Web Vitals entirely.

And nobody was checking all of this together, comprehensively, in under a minute. I know because I searched: I spent three weeks in late 2024 evaluating every audit tool I could find. They all either focused on legacy compatibility or missed modern requirements entirely. None of them checked for AI search engine readiness, which in 2025 is inexcusable.

The Gap Between Standards and Reality

Here's what changed between 2020 and 2025 that most audit tools haven't caught up with:

AI Search Engines Changed Everything

In 2020, Google was the dominant search engine. In 2025, ChatGPT search, Perplexity, and Google's AI Overviews account for a significant portion of traffic. But they don't crawl websites the same way.

AI search engines prioritise structured data. They need clean Schema.org markup, semantic HTML, clear content hierarchy, and direct answers to questions. A site optimised for traditional SEO might be invisible to AI crawlers.

Here's the reality: your customers are searching with AI tools now. ChatGPT, Perplexity, Google's AI overviews. If your site has malformed structured data or poorly marked-up content, you're invisible to these tools. Traditional SEO scores look great, but you're losing traffic—and your customers are losing opportunities to grow.

Most audit tools still don't check for this. They'll validate your meta description but won't tell you if your Schema.org markup is actually parseable by AI or if your content is structured for the way people search in 2025.
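For a sense of what a baseline machine-readability check looks like, here's an illustrative sketch (not GuardianScan's actual code) that extracts a page's JSON-LD blocks and flags the two most common failures: markup that doesn't parse, and entities without an @type:

```python
import json
import re

# Deliberately simple extractor: assumes the type attribute is
# double-quoted, which covers the common case.
LDJSON_RE = re.compile(
    r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
    re.DOTALL | re.IGNORECASE,
)

def audit_structured_data(html: str) -> list[str]:
    """Return a list of problems found in a page's JSON-LD blocks."""
    problems = []
    blocks = LDJSON_RE.findall(html)
    if not blocks:
        return ["no JSON-LD structured data found"]
    for i, raw in enumerate(blocks):
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            problems.append(f"block {i}: malformed JSON")
            continue
        items = data if isinstance(data, list) else [data]
        for item in items:
            if isinstance(item, dict) and "@type" not in item:
                problems.append(f"block {i}: entity missing @type")
    return problems

page = ('<script type="application/ld+json">'
        '{"@context": "https://schema.org", "name": "Acme"}'
        '</script>')
print(audit_structured_data(page))  # ['block 0: entity missing @type']
```

A production check validates far more (expected properties per @type, nesting, @context), but even this catches markup that no crawler can use.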

Core Web Vitals Evolved and Tools Didn't Notice

In March 2024, Google replaced First Input Delay (FID) with Interaction to Next Paint (INP). This wasn't a minor tweak—it changed how we measure responsiveness. FID only measured the delay before the first interaction started processing. INP measures the full cycle: input delay, processing time, and rendering delay.

The threshold is strict: under 200 milliseconds. Sites that felt "snappy enough" started failing. I kept seeing this pattern—interactions that seemed instant were actually taking 300-400ms when measured properly. Rankings dropped. Conversions suffered. Customers lost revenue.

Here's what matters: your customers' users expect instant responses. Every sluggish interaction is a potential lost sale, a frustrated user, a competitor's win. This isn't academic—it's money and trust.

GuardianScan measures Total Blocking Time (TBT), a lab metric that correlates strongly with INP in the field. Whilst TBT and INP aren't identical, TBT provides reliable indicators of responsiveness issues that affect user experience and rankings. You catch the problems before they cost your customers business.
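For context, Total Blocking Time has a simple definition: for every main-thread task between First Contentful Paint and Time to Interactive, sum the portion that exceeds 50 milliseconds. A minimal sketch:

```python
def total_blocking_time(task_durations_ms: list[float]) -> float:
    """Sum the blocking portion (anything over 50 ms) of each
    main-thread task, per the Total Blocking Time definition."""
    BLOCKING_THRESHOLD_MS = 50
    return sum(max(0, d - BLOCKING_THRESHOLD_MS) for d in task_durations_ms)

# Four tasks between FCP and TTI: 30, 80, 250 and 45 ms.
# Only the 80 ms and 250 ms tasks block: (80-50) + (250-50) = 230 ms.
tbt = total_blocking_time([30, 80, 250, 45])
print(f"TBT: {tbt:.0f} ms")  # TBT: 230 ms, which fails the <200 ms target
```

The 30 ms and 45 ms tasks contribute nothing; only time beyond the 50 ms threshold counts as blocking, which is why one long task hurts far more than many short ones.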

WCAG Raised the Bar and Most Tools Didn't Notice

WCAG 2.2 added nine new success criteria, and they're not trivial:

  • Touch targets - WCAG 2.2's new Target Size (Minimum) criterion requires 24×24 CSS pixels at Level AA, with 44×44 remaining the enhanced AAA target. Harder than it sounds on dense interfaces. I've seen countless sites with cramped icon buttons that fall short, but their audit tools (still checking against 2.1) didn't flag it.
  • Forms can't require redundant entry - Breaks a lot of checkout flows. Asking users to enter their address twice? That's now a Level A violation.
  • Authentication must offer accessible alternatives - No more "remember this grid of images" CAPTCHA. You need password managers, biometric logins, or other accessible options.
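The target-size criterion in particular is easy to check mechanically once you have rendered element sizes. An illustrative sketch, using WCAG's thresholds of 24×24 CSS pixels at Level AA (SC 2.5.8) and 44×44 for the enhanced AAA criterion (SC 2.5.5); the spacing exception for small targets is ignored here for simplicity:

```python
def audit_touch_targets(targets: list[dict]) -> list[str]:
    """Flag interactive elements below the WCAG target-size thresholds.

    Each target is a dict with 'name', 'width' and 'height' in CSS px.
    Note: SC 2.5.8 allows sub-24px targets with sufficient spacing;
    that exception is not modelled in this sketch.
    """
    AA_MIN, AAA_MIN = 24, 44
    findings = []
    for t in targets:
        size = min(t["width"], t["height"])
        if size < AA_MIN:
            findings.append(f"{t['name']}: {t['width']}x{t['height']} fails AA minimum")
        elif size < AAA_MIN:
            findings.append(f"{t['name']}: {t['width']}x{t['height']} below enhanced 44x44 target")
    return findings

buttons = [
    {"name": "close icon", "width": 20, "height": 20},
    {"name": "menu button", "width": 32, "height": 32},
    {"name": "submit", "width": 48, "height": 48},
]
for finding in audit_touch_targets(buttons):
    print(finding)
```

Running this flags the 20×20 close icon as a hard failure and the 32×32 menu button as below the enhanced target, while the 48×48 submit button passes clean.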

Government websites need WCAG 2.1 Level AA compliance by April 2026, but here's what actually matters: accessibility isn't just compliance. It's about not excluding potential customers. Every barrier you remove opens your client's business to more people. That's growth. That's what we should be building for.

And here's the kicker: automated accessibility audits catch 30-40% of issues at best according to WebAIM's research. The rest—colour contrast in complex layouts, keyboard navigation logic, screen reader compatibility—require manual testing or tools that actually understand context.

Modern Frameworks and Performance Patterns

Modern frameworks like Next.js, Remix, and Astro have fundamentally changed how we build for the web. Framework defaults shifted—caching behaviours changed, rendering strategies evolved, bundle sizes ballooned if you weren't careful.

For teams keeping up with framework updates, this means apps that performed well might suddenly need optimisation—not because they're slower, but because framework defaults changed. What was automatic performance work now requires explicit configuration.

GuardianScan detects framework patterns and flags common issues: oversized JavaScript bundles for static content, missing image optimisation, inefficient font loading strategies. It won't tell you which framework you should use, but it will tell you if you're shipping 400KB of JavaScript for what appears to be a mostly static page.

Framework Complexity vs. Actual Needs

By most industry estimates, the vast majority of new enterprise applications are now cloud-native, built with modern frameworks. These frameworks offer powerful features (server components, progressive enhancement, edge rendering), but they also make it easy to ship too much JavaScript.

I've audited dozens of sites where everything runs client-side—massive bundles, poor hydration performance, unnecessary rerenders—when half the functionality could be achieved with less JavaScript.

Example from July 2025: An e-commerce site had a 287KB JavaScript bundle for a product listing page with minimal interactivity. The bundle size alone suggested over-engineering. After optimisation, they cut it to 134KB and dropped their LCP from 4.1s to 2.3s.

GuardianScan flags when bundle sizes exceed sensible thresholds (300KB+), helping you spot potential over-engineering before it impacts users.
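The bundle-size check itself is the easy part; the judgement is in choosing the budget. A sketch of the kind of threshold check described above, with the 300KB figure from this post as an assumed default:

```python
def flag_oversized_bundles(scripts_kb: dict[str, float],
                           budget_kb: float = 300) -> list[str]:
    """Flag a page whose total JavaScript exceeds a size budget.

    The 300 KB default mirrors the threshold described in the post;
    tune it to your own performance budget.
    """
    total = sum(scripts_kb.values())
    if total <= budget_kb:
        return []
    biggest = max(scripts_kb, key=scripts_kb.get)
    return [
        f"total JS {total:.0f} KB exceeds {budget_kb:.0f} KB budget",
        f"largest bundle: {biggest} ({scripts_kb[biggest]:.0f} KB)",
    ]

# Script weights as delivered (post-compression) for one page
page_scripts = {"main.js": 287, "vendor.js": 96, "analytics.js": 41}
for warning in flag_oversized_bundles(page_scripts):
    print(warning)
```

Pointing at the largest bundle first matters: it tells you where to start, not just that something is wrong.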

What I Learned Building the Solution

Building GuardianScan forced me to think deeply about what actually matters in a website audit. Not what's easy to measure, but what creates real value.

Comprehensive but Fast

The ideal audit would check everything that matters for modern websites. Every WCAG 2.2 criterion, every performance metric, every security header, SEO fundamentals, AI search readiness. Most tools either check a subset of these or take forever to run.

GuardianScan runs 50 automated checks covering all these areas.

I've written before about systems thinking and feedback loops. The value of feedback is inversely proportional to its latency: feedback that arrives in five minutes is dramatically more useful than feedback that takes an hour, even if it's slightly less comprehensive.

So I optimised for 45 seconds for a comprehensive scan. Here's why that matters: you'll actually use it. You'll check before every deploy instead of "when you remember." You'll catch issues early when they're cheap to fix, not late when they're embarrassing and expensive. You'll ship quality work consistently, not occasionally.

This isn't about speed for speed's sake. It's about making quality the path of least resistance. When doing the right thing is easier than cutting corners, standards rise naturally.

This meant ruthless focus on modern standards:

  • Check everything that matters for 2025 web development
  • Skip legacy browser compatibility checks (if you're supporting IE11, this isn't your tool)
  • Focus on what actually affects users and search visibility
  • Provide actionable diagnostics, not vague scores

Actionable Insights Over Vanity Metrics

Lighthouse gives you a performance score out of 100. It's satisfying—people love scores—but it doesn't help you fix anything. A score of 78 doesn't tell you what's actually wrong or what to prioritise.

Here's what I believe: developers want to do good work. They want to ship fast, accessible, secure sites. They don't need judgment—they need clarity. They need to know exactly what's wrong, why it matters to their users, and how to fix it.

So GuardianScan surfaces insights, not scores. Instead of "Performance: 78/100," it tells you:

  • LCP is 3.2s (target: under 2.5s), caused by an unoptimised hero image at /images/hero.jpg
  • TBT is 340ms (target: under 200ms), likely from 147KB of client-side JavaScript
  • CLS is 0.15 (target: under 0.1), from images without width/height attributes
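The difference between a score and a diagnostic is easy to see in code. A minimal sketch that compares each reading against Google's "good" thresholds and reports per-metric verdicts instead of a blended number:

```python
# "Good" thresholds from Google's Core Web Vitals guidance
TARGETS = {"LCP": (2.5, "s"), "TBT": (200, "ms"), "CLS": (0.1, "")}

def diagnose(metrics: dict[str, float]) -> list[str]:
    """Turn raw metric readings into pass/fail lines with targets,
    rather than blending them into a single opaque score."""
    report = []
    for name, value in metrics.items():
        target, unit = TARGETS[name]
        status = "pass" if value <= target else "FAIL"
        report.append(f"{name}: {value}{unit} (target: <= {target}{unit}) [{status}]")
    return report

for line in diagnose({"LCP": 3.2, "TBT": 340, "CLS": 0.15}):
    print(line)
```

A real report also attaches the likely cause (the hero image, the bundle, the unsized images), but even this structure tells you what to fix and in what order.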

Same data, but actionable. You know exactly what to fix, why it matters to your users, and what impact it has on your client's business. Fix the hero image, LCP drops, page loads faster, conversions improve. That's value.

This is where pattern detection pays off. GuardianScan identifies when images aren't using modern formats (WebP, AVIF), when fonts are loading inefficiently, or when you're shipping oversized bundles. It focuses on what affects users—load times, visual stability, responsiveness—regardless of your tech stack.

Honest About Limitations, Serious About Impact

Here's an uncomfortable truth I encountered early: perfect accessibility auditing requires human judgement. Automated tools catch the mechanical stuff—missing alt text, insufficient colour contrast, invalid ARIA—but they can't evaluate whether your alt text is meaningful, or whether your keyboard navigation flow makes logical sense.

WebAIM's research consistently shows automated tools catch 30-40% of WCAG issues. I aimed higher. GuardianScan catches about 70% by combining standard automated checks with smarter pattern detection (like finding navigation links missing accessible names, or detecting keyboard focus traps in modals).
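As an example of that pattern detection, here's an illustrative sketch (not GuardianScan's implementation) of one such check: links a screen reader would announce as unnamed because they contain no text, no usefully alt-texted image, and no aria-label:

```python
from html.parser import HTMLParser

class AccessibleNameChecker(HTMLParser):
    """Flags <a> links with no text content, no alt-texted image and no
    aria-label(ledby), which screen readers announce as unnamed links."""
    def __init__(self):
        super().__init__()
        self._open_link = None   # attrs of the <a> currently being parsed
        self._has_name = False
        self.unnamed = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._open_link = dict(attrs)
            self._has_name = False
        elif tag == "img" and self._open_link is not None:
            # an <img alt="..."> inside a link gives it an accessible name
            if dict(attrs).get("alt", "").strip():
                self._has_name = True

    def handle_data(self, data):
        if self._open_link is not None and data.strip():
            self._has_name = True

    def handle_endtag(self, tag):
        if tag == "a" and self._open_link is not None:
            labelled = any(k in self._open_link
                           for k in ("aria-label", "aria-labelledby"))
            if not (self._has_name or labelled):
                self.unnamed.append(self._open_link.get("href", "(no href)"))
            self._open_link = None

def find_unnamed_links(html: str) -> list[str]:
    checker = AccessibleNameChecker()
    checker.feed(html)
    return checker.unnamed

html = '<a href="/next"><img src="arrow.svg"></a><a href="/home">Home</a>'
print(find_unnamed_links(html))  # ['/next']
```

This is the kind of objectively testable issue that automated checks can catch; whether the name is *meaningful* still needs a human.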

But here's what matters: that 70% is the stuff you can fix right now, today, without specialised training. Missing alt text. Poor colour contrast. Invalid ARIA. Touch targets that are too small. These are objectively measurable, clearly fixable issues that—when you fix them—open your client's site to more users immediately.

I'd rather be honest about what the tool does than overpromise. GuardianScan catches the 70% of accessibility issues that are objectively testable, clearly documents what it checks, and reminds you that manual testing is still necessary for full WCAG 2.2 compliance.

But that 70%? That's already more than most sites do. If every developer fixed that 70%, the web would be dramatically more accessible. That's raising standards. That's what we're here for.

Modern Standards as a Feature

Most audit tools are conservative by nature. They check against established standards because those are well-documented and legally defensible. But "established standards" often means "what was true two years ago."

I decided to position GuardianScan around comprehensive modern standards—not as an early adopter gimmick, but because that's what actually matters if you're building websites in 2025:

  • Measure responsiveness properly - Track Total Blocking Time, a reliable indicator of user experience. Sites need responsive interactions—sluggish interfaces lose users.
  • Target WCAG 2.2 Level AA - Government sites need WCAG 2.1 by April 2026, but 2.2 is where compliance is heading. The nine new criteria (24×24px minimum touch targets, redundant entry prevention, accessible authentication) aren't optional.
  • Validate AI search readiness - Check Schema.org markup quality, content structure for AI parsing, FAQ schema, article schema, breadcrumb navigation. If AI search engines can't parse your content, you're invisible to growing traffic sources.
  • Flag bundle size issues - Identify when you're shipping oversized JavaScript bundles. A mostly-static page shouldn't need 300KB+ of JavaScript to render.
  • Comprehensive security checks - HTTPS enforcement, CSP headers, HSTS, X-Frame-Options, mixed content detection, and other headers that protect users.
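The security-header portion of that list is straightforward to sketch. An illustrative example (a simplified presence check; a real audit should also validate header values like HSTS max-age and CSP directives):

```python
# Baseline headers and why their absence matters
REQUIRED_HEADERS = {
    "strict-transport-security": "HSTS missing: HTTPS downgrade risk",
    "content-security-policy": "CSP missing: no XSS mitigation policy",
    "x-frame-options": "X-Frame-Options missing: clickjacking risk",
    "x-content-type-options": "nosniff missing: MIME-sniffing risk",
}

def audit_security_headers(headers: dict[str, str]) -> list[str]:
    """Report which baseline security headers a response is missing.
    Header names are compared case-insensitively, as HTTP requires."""
    present = {name.lower() for name in headers}
    return [msg for name, msg in REQUIRED_HEADERS.items()
            if name not in present]

response_headers = {
    "Content-Type": "text/html",
    "Strict-Transport-Security": "max-age=63072000; includeSubDomains",
}
for issue in audit_security_headers(response_headers):
    print(issue)
```

For this example response, HSTS passes while CSP, X-Frame-Options and nosniff are all flagged. These are one-line fixes in most server configs, which is exactly why leaving them missing is inexcusable.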

This creates a clear tradeoff: if you're maintaining legacy systems or supporting old browsers, GuardianScan isn't for you. But if you're building modern web applications for 2025 and beyond, it checks everything comprehensively.

The Product as System Design

Building GuardianScan reinforced something I've learned over fifteen years: products are systems, and systems thinking applies to how you design them.

The temptation is to add features. More checks, more frameworks, more integrations, more customisation. Every feature feels valuable in isolation. But features interact—they create complexity, cognitive load, maintenance burden.

I kept asking: what's the simplest system that delivers the core value? And the core value isn't "comprehensive auditing"—it's "quickly identify the issues that actually matter."

So GuardianScan does 50 comprehensive checks across six categories:

  1. Modern Standards (13 checks covering image optimisation, lazy loading, font strategies, code splitting, caching)
  2. Performance (5 checks including LCP, CLS, Total Blocking Time, First Contentful Paint, Speed Index)
  3. SEO & AI Search (8 checks covering meta tags, Schema.org markup, sitemaps, structured data for AI readability)
  4. Security (8 checks including HTTPS, CSP headers, HSTS, X-Frame-Options, mixed content detection)
  5. Accessibility (12 checks for WCAG 2.2 Level AA compliance, including keyboard navigation, screen readers, touch targets)
  6. Code Quality (4 checks covering console errors, page size, HTTP requests, resource efficiency)

Not 300 legacy compatibility checks. Not customisable rulesets. Just comprehensive coverage of what actually matters for modern web development in 2025.

This is what I call "productive chaos" from my earlier post—clear boundaries, flexible execution. The boundaries are non-negotiable (these specific checks, this performance target, this accuracy threshold). The execution is optimised for speed and clarity.

Who This Is Actually For

I built GuardianScan for myself. That sounds selfish, but it's actually the most honest product development approach I know: solve a problem you have, that you understand deeply, that you can evaluate without pretense.

But "myself" is a proxy for everyone who genuinely cares about doing good work and helping their customers succeed:

Agencies who want to deliver exceptional value. You're not just building websites—you're helping your clients grow their businesses. Every performance issue you catch saves them conversions. Every accessibility barrier you remove opens them to more customers. Every security hole you fix protects their reputation. You need tools that help you consistently deliver work that drives real business results, not just looks pretty. You need to catch issues before launch because you care about your clients' success, not just because you're afraid of looking incompetent.

Freelancers who take pride in their craft. You inherit a project or start fresh, and you want to do it right. You want to ship work that's fast, accessible, and secure. Work that helps your clients compete and grow. You need to identify what matters—the issues that actually impact users and business metrics—and fix those first. You need data to back up your recommendations because you respect your clients' investment and want to deliver measurable value.

In-house teams who ship with confidence. You're responsible for your company's web presence. Every deploy is an opportunity to improve or a risk to degrade. Did that last update break accessibility? Did bundle size creep up? Is the site still fast on mobile? You need to know immediately because your users and your business depend on it.

All three groups share something fundamental: they care. They want to do good work. They want to help their customers succeed. They just need tools that make quality achievable, not aspirational.

What I'm Not Solving

Here's what GuardianScan doesn't do:

It doesn't replace manual accessibility testing. If you need full WCAG compliance, you still need human evaluators. GuardianScan catches the automated 70%, which is valuable, but it's not certification.

It doesn't do continuous application monitoring. Each scan is a point-in-time audit, not an always-on error tracker like Sentry or LogRocket. You run it when you need it: before launch, after deploys, when something feels wrong.

It doesn't customise to your exact standards. Some tools let you configure thresholds, disable checks, add custom rules. GuardianScan is opinionated: these are the modern standards, these are the thresholds that matter. If you disagree, it's probably not the right tool for you.

It doesn't audit authenticated content. It checks public pages. If your app requires login, GuardianScan can't reach those screens. This is a genuine limitation I might address later, but for now, it's out of scope.

It can't analyse your source code. GuardianScan scans the live site—what gets delivered to browsers. For deeper insights like checking individual component usage, analysing unused dependencies, or reviewing build configurations, you'd need source code access. Future integrations (like build plugins or GitHub apps) could add this, but the current focus is on what's detectable from the live site: performance, security headers, accessibility, and SEO.

Being clear about limitations isn't weakness—it's honesty. Every tool makes tradeoffs. I'd rather be upfront about mine than overpromise.

What This Actually Means for You and Your Customers

If you're still manually checking sites across four different tools, you're spending 20-30 minutes per audit. That's not just your time—that's time you're not shipping features, not serving customers, not growing their business.

But here's what really matters: every issue you catch early is an opportunity you protect for your customer. Every performance problem you fix before launch is conversions you save. Every accessibility barrier you remove is customers you welcome. Every security header you implement is trust you build.

For your customers' users: Faster sites that work for everyone. Sites that show up in AI search results. Sites that feel responsive and modern. Sites that don't exclude people with disabilities. This is what we should be building.

For your customers' businesses: Better conversion rates from better performance. More traffic from better SEO and AI search visibility. Wider audience reach from better accessibility. Stronger trust from better security. Faster growth from doing the fundamentals right.

For you as a developer: Ship work you're proud of. Catch issues before they're embarrassing. Justify your recommendations with clear data. Deliver measurable value to your clients. Spend your time building, not auditing.

That's what GuardianScan is for. Not vanity metrics. Not compliance theatre. Real value for real customers trying to grow real businesses.

Launch and Pricing

You can sign up now at guardianscan.ai to get notified and access your free scan. Three tiers are available:

Free tier: One complimentary scan to try the tool—see exactly what you're getting before committing. Available now when you sign up.

Professional: £24/month for unlimited scans, monitoring, and history tracking. That's intentionally positioned as "cheaper than an hour of your time." If it saves you thirty minutes on one audit, it's already paid for itself. Launches November 1st, 2025.

Agency: £79/month with team collaboration features and API access for integrating into your workflow. Coming soon after Pro launch.

Look, I'm not trying to build a SaaS empire or raise venture capital. I genuinely believe the web development industry needs higher standards, and those standards need to be accessible to everyone—not just teams with enterprise budgets and dedicated QA departments.

If a solo freelancer can run GuardianScan before shipping a client site and get 50 checks' worth of findings in 45 seconds, they can compete with agencies ten times their size. If a small agency can consistently deliver fast, accessible, secure sites, they can win clients from competitors who ship sloppy work. If in-house teams can catch regressions before they reach production, they can ship with confidence instead of anxiety.

The Professional tier launches November 1st—just one week away. The subscription model supports ongoing development and server costs, whilst the free tier lets you genuinely try before committing—no credit card, no pressure.

If it helps a hundred developers ship better work and deliver more value to their customers, that's success. If it helps raise the floor on what "good enough" looks like across the industry, that's everything I hoped for.

What I'd Do Differently

Building GuardianScan taught me things I wish I'd known earlier:

Start with the positioning, not the features. I spent weeks debating whether to include X or Y check before realising the real question was: who is this for, and what problem does it solve for them? Once I answered that, feature decisions became obvious.

Opinionated tools are easier to build and more valuable. Every time I considered adding customisation—"let users configure thresholds" or "support custom rulesets"—I created complexity that didn't serve the core value. The best tools have a point of view.

Speed is a feature, not a constraint. I could have built a more comprehensive audit that takes five minutes. But optimising for 45 seconds forced me to prioritise ruthlessly, which made the tool better. Constraints drive clarity.

Modern standards are a competitive moat. Most tools optimise for compatibility with legacy systems. Building comprehensive checks for modern standards (WCAG 2.2, Core Web Vitals, security headers) and emerging requirements (AI search optimisation, modern image formats) makes you less universally applicable but far more valuable to the people who matter—developers building new things, not maintaining legacy systems that should have been retired years ago.

Raising the Floor, Not Just the Ceiling

After building GuardianScan, I've been thinking about what "raising standards" actually means. It's not about the exceptional sites—the teams with unlimited budgets and dedicated performance engineers are already shipping excellent work. They have the resources and expertise.

It's about raising the floor. It's about making "good enough" actually good. It's about making quality accessible to the freelancer building their third website, the small agency juggling ten clients, the in-house developer who inherited a codebase and just wants to make it better.

The web development industry has incredible potential. We have frameworks that make complex interactions trivial. We have deployment tools that make shipping instant. We have browsers that are faster and more capable than ever. But if we're still shipping slow, inaccessible, insecure sites because our audit tools are five years out of date, we're not living up to that potential.

Every site that's needlessly slow costs somebody business. Every site that excludes disabled users costs somebody opportunity. Every site with poor AI search optimisation costs somebody growth. These aren't abstract problems—they're real costs to real businesses trying to serve real customers.

GuardianScan is my attempt to make quality more achievable. To make "did we ship something good?" answerable in 45 seconds instead of 45 minutes. To help developers who care about their work deliver value to customers who are trying to grow.

If that resonates with you—if you care about doing good work and helping your customers succeed—give it a try. The free scan shows you exactly what you're getting. No tricks, no pressure, no BS. Just an honest tool trying to help raise the standards of an industry I genuinely care about.


The Professional tier launches November 1st, 2025. Sign up now at guardianscan.ai to get notified and access your free scan. See exactly what issues your site has—performance, accessibility, security, SEO, AI search readiness—in 45 seconds. No credit card required, no pressure. Just an honest audit that helps you ship better work and deliver real value to your customers.

Michael Pilgram is the founder of Numen Technology, a UK web development company. He's spent over fifteen years building websites and helping clients grow their businesses through better web development. He writes about raising standards, shipping quality work, and making the web better at michaelpilgram.co.uk.
