Making A Mojo Link Checker

Tags: perl site mojolicious programming

I wrote a Perl script using utility features in Mojolicious to check all of the links in my site.

Nothing lasts forever. Sites get reorganized, move, or disappear. As my own site has gotten older — some of these pages are over fifteen years old — links from old posts stop working. Link rot is a fact of life on the Internet. I want to minimize it here.

Instead of manually checking each of the 245 posts on this site, I chose to write some code that identifies the dead-end links. Then I could manually adjust the bad links. Yay! That’s hand-crafted automation there.

use Mojo!

Mojolicious is a Perl framework for making Web applications. It also happens to provide excellent support for a wide range of Web-related programming.

I mentioned Mojolicious here before. I use it as a part of my daily dev toolkit, even though I still haven’t made a real Web app with it.

The code

I could just dump the script here and go on with my day, but I feel like typing a lot for some reason. Let’s go through the major chunks of the code.

The setup

use 5.24.0;
use warnings;
use experimental 'signatures';

Whenever possible, I specify the latest version of Perl (currently 5.24). It enables some features and deprecates others. If nothing else, it reminds me when I last worked on the code. Recent Perl versions automatically enable strict, but it’s useful for me to also turn on warnings.

The experimental CPAN module saves some boilerplate when using Perl features that have not fully stabilized — such as function signatures.
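Here is a minimal sketch of what that buys you — a signature with a default value, no `@_` unpacking, and no experimental-feature warnings (the `greeting` function is my own example, not from the script):

```perl
use 5.024;
use warnings;
use experimental 'signatures';

# Without the pragma, Perl 5.24 warns that signatures are experimental.
sub greeting ($name, $punct = '!') {
    return "Hello, $name$punct";
}

say greeting('Mojolicious');    # Hello, Mojolicious!
say greeting('world', '?');     # Hello, world?
```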

use Mojo::DOM;
use Mojo::File;
use Mojo::JSON qw(decode_json);
use Mojo::URL;
use Mojo::UserAgent;

Mojolicious provides a remarkable amount of functionality for such a small installation. This is just what I’m explicitly using.

Mojo::DOM: an HTML/XML DOM parser that supports CSS selectors.
Mojo::File: for handling file paths and easily reading and writing files.
Mojo::JSON: its decode_json lets me turn the Hugo config.json file into a Perl structure.
Mojo::URL: understands the components of Uniform Resource Locators.
Mojo::UserAgent: makes HTTP and WebSocket requests (similar to LWP::UserAgent, or Requests for Python people).

From the top

my $config_file   = "config.json";
my $config        = decode_json(Mojo::File->new($config_file)->slurp);
my $site          = Mojo::URL->new($config->{BaseURL});
my $root          = $config->{publishDir} || 'public';
my $checked_links = {};
my $ua            = Mojo::UserAgent->new;
$ua->max_redirects( 5 ); # some sites love lots of redirects

my $test_file = shift @ARGV // '';

if ( $test_file ) {
  check_links_in( $test_file );
}
else {
  my $path = Mojo::File->new( $root );
  my $files = $path->list_tree->grep( qr{ \. (?:html|xml) $ }x );

  $files->each( sub { check_links_in($_); } );
}

This is the important bit: load the config, create a user agent, and check links in one or all of the generated HTML files. I checked the generated HTML files in public because I didn’t feel like messing with hugo server or a Mojolicious mini-app. Scraping a local server could be an option later.

Using Mojolicious for everything was so much fun that I rewrote config.yaml as config.json to allow using Mojo::JSON here. Hugo’s built-in support for different configuration formats made that a painless shift. Then Mojo lets me slurp the contents of the config file into a single string, which decode_json turns into a hash reference.
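The decode step, in isolation, looks something like this. I’m using the core JSON::PP module here as a dependency-free stand-in — Mojo::JSON’s decode_json behaves the same way for this case — and the config fields are a trimmed-down guess at the shape of a Hugo config.json:

```perl
use 5.024;
use warnings;
use JSON::PP qw(decode_json);   # core stand-in; Mojo::JSON works the same way here

# A trimmed-down config in the shape the script expects.
my $json   = '{"BaseURL":"https://example.com/","publishDir":"public"}';
my $config = decode_json($json);

say $config->{BaseURL};     # https://example.com/
say $config->{publishDir};  # public
```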

list_tree gives a recursive directory listing of everything under $root as a Mojo::Collection. Collections provide a tidy toolkit of list handling functionality without requiring me to go back and forth between arrays and array references. I could find and iterate over all the HTML and XML files in vanilla Perl 5, but I like this better.
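The vanilla-Perl version I alluded to might look like this, using the core File::Find module (the helper name and sorting are my own choices, not from the script):

```perl
use 5.024;
use warnings;
use experimental 'signatures';
use File::Find qw(find);

# Collect every .html and .xml file under $root -- roughly what
# list_tree->grep hands back as a Mojo::Collection.
sub html_and_xml_under ($root) {
    my @files;
    find(sub {
        push @files, $File::Find::name if -f && /\.(?:html|xml)$/;
    }, $root);
    return sort @files;
}

my @files = html_and_xml_under('public');
```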

After a few runs, I added the ability to specify a single file in @ARGV. That way I can figure things out when that one link in that one file causes trouble.

sub check_links_in($filename) {
  my $html = Mojo::File->new( $filename )->slurp;
  my $dom = Mojo::DOM->new( $html );
  my $links = $dom->find( '[href], [src]' );

  $links->each( sub($link, $n) {
    my $target = $link->attr( "href" ) || $link->attr( "src" );

    # Assume status will not change during the same run.
    return if exists $checked_links->{ $target };

    $checked_links->{ $target } = 1;
    my $url = Mojo::URL->new( $target );

    # Ignore email links
    return if $url->scheme && $url->scheme eq 'mailto';

    $checked_links->{ $target } = file_exists_for( $url )
      // external_link_works_for( $url )
      // 0;

    # In this version we only care about invalid links.
    unless ( $checked_links->{ $target } ) { say summary_for( $target, $filename ); }
  } );
}

Once again I slurp a file into a string. This time it gets handed off to Mojo::DOM so it can find any elements with src or href attributes, and then create a Mojo::URL from the appropriate attr. Mojo::URL does the tedious work of parsing URLs and making components like scheme available.

Leaning on the // defined-or logical shortcut lets me take advantage of the three boolean states of Perl: truthy, falsey, and “I dunno.” Each URL-testing subroutine can return undef to indicate that it doesn’t know what to do with the URL, and let the next subroutine in line handle it. If nobody knows what to do with it, then that’s a bad link and gets remembered as a falsey value.
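Stripped of the URL details, the pattern looks like this. The two checker subs are hypothetical stand-ins for file_exists_for and external_link_works_for:

```perl
use 5.024;
use warnings;
use experimental 'signatures';

# Each checker returns: truthy (works), falsey-but-defined (broken),
# or undef ("I dunno, ask the next one").
sub local_check  ($url) { return $url =~ m{^/}        ? 1 : undef }
sub remote_check ($url) { return $url =~ m{^https?://} ? 1 : undef }

my $status = local_check('https://example.com/')
  // remote_check('https://example.com/')
  // 0;
say $status;    # 1 -- local_check punted, remote_check claimed it

# Nobody claims a mailto: link, so the chain falls through to 0.
say local_check('mailto:me@example.com') // remote_check('mailto:me@example.com') // 0;   # 0
```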

each hands two items to the subroutine it invokes: an item in the collection and what number in the collection that item is (starting from 1). No, I don’t use $n, but I wanted you to see that it’s available. You can also access the item as $_ as I did earlier. You can even do your subroutine arguments the old fashioned way with @_.

sub file_exists_for($url) {
  # Ignore full urls that aren't pointed at my site.
  # Returning undef hands the decision to the next checker.
  return if $url->host && $url->host ne $site->host;

  if ( $url->fragment && $url->path eq '' ) {
    # Points to a URL fragment within itself
    # Today I don't care about those.
    # If I did, I'd remember what file $url came from, load it, and check the DOM.
    return 1;
  }

  my $path = $url->path
    or return;

  if ( $path eq '/' || $path->trailing_slash ) {
    $path = $path->merge("index.html");
  }

  my $file = $root . $path;
  return -f $file;
}

I would check for ../ abuse if this were a general-purpose script, but it’s mostly links I added by hand and checked manually at some point in the last fifteen years. So, assuming past me was not acting maliciously or foolishly, the code handles the more likely situations.

Mojo::Path manipulation delights me. Sure, this could be a regular expression substitution with fewer characters of code, but someone else seeing merge after a check for a trailing slash would probably understand that I’m adjusting for the common practice of /thing/ being a link to /thing/index.html. They might understand even if they weren’t Perl developers!
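For comparison, here is what that terser regex-substitution version might look like (the helper name is mine):

```perl
use 5.024;
use warnings;
use experimental 'signatures';

# The regex version of the trailing-slash fix:
# '/thing/' (and the bare '/') become '/thing/index.html'.
sub with_index ($path) {
    $path .= 'index.html' if $path =~ m{/$};
    return $path;
}

say with_index('/thing/');      # /thing/index.html
say with_index('/thing.html');  # /thing.html (unchanged)
```

It works, but the reader has to puzzle out the intent that merge and trailing_slash state outright.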

sub external_link_works_for($url) {
  my $response;

  # Ignore tutorial demo links
  return 1
    if $url->host && $url->host eq 'localhost';

  # Ex: //
  # Produced by some Hugo shortcodes.
  my $is_protocol_relative = !$url->scheme && $url->host && $url->host ne $site->host;

  if ( $is_protocol_relative ) {
    # Use my site's choice of HTTP / HTTPS
    $url->scheme( $site->scheme );
  }

  eval {
    $response = $ua->head( $url )->result;
  };

  if ( $@ ) {
    warn "When checking $url: $@";
    return 0;
  }

  return $response->is_success;
}

After some quick checks to ensure I’m not looking at a blog demo link and that I handle protocol-relative URLs correctly, I wrap a simple head request in an eval block.

I use HTTP HEAD because I only care about whether the link is valid. I don’t want the full content at the link. eval lets me catch timeouts and requests being sent to Web sites which no longer exist. Assuming no errors, this eventually returns whether the result of the HTTP transaction succeeded with is_success.
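The eval/$@ error-handling shape, stripped of the HTTP details, is sketched below. fetch_status is a hypothetical stand-in for the head(...)->result call, rigged to die the way a timeout or a vanished host would:

```perl
use 5.024;
use warnings;
use experimental 'signatures';

# Stand-in for $ua->head($url)->result: dies like a real network failure.
sub fetch_status ($url) {
    die "Connect timeout\n" if $url =~ /unreachable/;
    return 200;
}

sub link_ok ($url) {
    my $status = eval { fetch_status($url) };
    if ($@) {
        warn "When checking $url: $@";
        return 0;   # a request that blew up is a bad link
    }
    return $status == 200 ? 1 : 0;
}
```

Without the eval, the first dead hostname would kill the whole run instead of producing one warning line.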

Summarize it

sub summary_for($target, $filename) {
  die "Didn't check [$target]?"
    unless exists $checked_links->{ $target };

  my $status = $checked_links->{ $target }
    ? "+"   # It worked!
    : "-";  # Something went wrong.
  return "$status $filename $target";
}

Today I only looked for bad links, but it can be useful to know the status of all links in my site. I used it a few times during development. May as well leave that bit of logic in.

What’s That Do?

$ ./scripts/link-checker > links.txt

A couple hundred lines like this, basically.

When checking Premature connection close at scripts/ line 53.
- public/categories/blogspot/index.html
- public/categories/blogspot/index.html
When checking Can't connect: Name or service not known at scripts/ line 53.
- public/categories/blogspot/index.html
When checking Connect timeout at scripts/ line 53.
- public/categories/blogspot/index.html
- public/categories/blogspot/index.html
When checking Connect timeout at scripts/ line 53.
- public/categories/coolnamehere/index.html

Goodness those are embarrassing.

Okay I’m gonna go fix this.

Some links just won’t work with this code. I may revisit this later, but I got what I need. All links should at least work in a browser for now.

An added bonus that I didn’t expect: this code also ran on Windows 10 with no changes needed.

More Ideas

Improvements that I thought of while putting this together, which I may eventually try out.

Honestly the script does what I need it to, and I might never implement these other ideas.

Added to vault 2024-01-15. Updated on 2024-02-02