Responsive Enhancement

24 ways has been going strong for ten years. That’s an aeon in internet timescales. Just think of all the changes we’ve seen in that time: the rise of Ajax, the explosion of mobile devices, the unrecognisably changed landscape of front-end tooling.

Tools and technologies come and go, but one thing has remained constant for me over the past decade: progressive enhancement.

Progressive enhancement isn’t a technology. It’s more like a way of thinking. Instead of thinking about the specifics of how a finished website might look, progressive enhancement encourages you to think about the fundamental meaning of what the website is providing. So instead of thinking of a website in terms of its ideal state in a modern browser on a nice widescreen device, progressive enhancement allows you to think about the core functionality in a more abstract way.

Once you’ve figured out what the core functionality is – adding an item to a shopping cart, posting a message, sharing a photo – then you can enable that functionality in the simplest possible way. That usually means starting with good old-fashioned HTML. Links and forms are often all you need. Then, once you have the core functionality working in a basic way, you can start to enhance to make a progressively better experience for more modern browsers.
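For example, the core functionality of adding an item to a shopping cart could be nothing more than a form that posts to the server. This is just a sketch, and the URL and field names are hypothetical:

<form action="/cart" method="post">
  <input type="hidden" name="product" value="partridge-in-a-pear-tree">
  <button type="submit">Add to cart</button>
</form>

Any Ajax enhancement can later submit the same form data in the background, with the plain form submission as the fallback.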

The advantage of working this way isn’t just that your site will work in older browsers (albeit in a rudimentary way). It also ensures that if anything goes wrong in a modern browser, it won’t be catastrophic.

There’s a common misconception that progressive enhancement means that you’ll spend your time dealing with older browsers, but in fact the opposite is true. Putting the basic functionality into place doesn’t take very long at all. And once you’ve done that, you’re free to spend all your time experimenting with the latest and greatest browser technologies, secure in the knowledge that even if they aren’t universally supported yet, that’s OK: you’ve already got your fallback in place.

The key to thinking about web development this way is realising that there isn’t one final interface – there could be many, slightly different interfaces depending on the properties and capabilities of any particular user agent at any particular moment. And that’s OK. Websites do not need to look the same in every browser.

Once you truly accept that, it’s an immensely liberating idea. Instead of spending your time trying to make websites look the same in wildly varying browsers, you can spend your time making sure that the core functionality of what you build works everywhere, while providing the best possible experience for more capable browsers.

Allow me to demonstrate with a simple example: navigation.

Step one: core functionality

Let’s say we have a straightforward website about the twelve days of Christmas, with a page for each day. The core functionality is pretty clear:

  1. To read about any particular day.
  2. To browse from day to day.

The first is easily satisfied by marking up the text with headings, paragraphs and all the usual structural HTML elements. The second is satisfied by providing a list of good ol’ hyperlinks.
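As a rough sketch (the URLs here are made up), that list of links needs nothing fancy:

<nav role="navigation">
  <ol>
    <li><a href="/day/1">Day one</a></li>
    <li><a href="/day/2">Day two</a></li>
    <li><a href="/day/3">Day three</a></li>
    ...
  </ol>
</nav>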

Now where’s the best place to position this navigation list? Personally, I’m a big fan of the jump-to-footer pattern. This puts the content first and the navigation second. At the top of the page there’s a link with an href attribute pointing to the fragment identifier for the navigation.

<body>
  <main role="main" id="top">
    <a href="#menu" class="control">Menu</a>
    ...
  </main>
  <nav role="navigation" id="menu">
    ...
    <a href="#top" class="control">Dismiss</a>
  </nav>
</body>

See the footer-anchor pattern in action.

Because it’s nothing more than a hyperlink, this works in just about every browser since the dawn of the web. Following hyperlinks is what web browsers were made to do (hence the name).

Step two: layout as an enhancement

The footer-anchor pattern is a particularly neat solution on small-screen devices, like mobile phones. Once more screen real estate is available, I can use the magic of CSS to reposition the navigation above the content. I could use position: absolute, flexbox or, in this case, display: table.

@media all and (min-width: 35em) {
  .control {
    display: none;
  }
  body {
    display: table;
  }
  [role="navigation"] {
    display: table-caption;
    columns: 6 15em;
  }
}

See the styles for wider screens in action.

Step three: enhance!

Right. At this point I’m providing core functionality to everyone, and I’ve got nice responsive styles for wider screens. I could stop here, but the real advantage of progressive enhancement is that I don’t have to. From here on, I can go crazy adding all sorts of fancy enhancements for modern browsers, without having to worry about providing a fallback for older browsers – the fallback is already in place.

What I’d really like is to provide a swish off-canvas pattern for small-screen devices. Here’s my plan:

  1. Position the navigation under the main content.
  2. Listen out for the .control links being activated and intercept that action.
  3. When those links are activated, toggle a class of .active on the body.
  4. If the .active class exists, slide the content out to reveal the navigation.

Here’s the CSS for positioning the content and navigation:

@media all and (max-width: 35em) {
  [role="main"] {
    transition: all .25s;
    width: 100%;
    position: absolute;
    z-index: 2;
    top: 0;
    right: 0;
  }
  [role="navigation"] {
    width: 75%;
    position: absolute;
    z-index: 1;
    top: 0;
    right: 0;
  }
  .active [role="main"] {
    transform: translateX(-75%);
  }
}

In my JavaScript, I’m going to listen out for any clicks on the .control links and toggle the .active class on the body accordingly:

(function (win, doc) {
  'use strict';
  var linkclass = 'control',
    activeclass = 'active',
    // Add the class if it's missing, remove it if it's present
    toggleClassName = function (element, toggleClass) {
      var reg = new RegExp('(\\s|^)' + toggleClass + '(\\s|$)');
      if (!element.className.match(reg)) {
        element.className += ' ' + toggleClass;
      } else {
        element.className = element.className.replace(reg, '');
      }
    },
    // Toggle the .active class whenever a .control link is clicked
    navListener = function (ev) {
      ev = ev || win.event;
      var target = ev.target || ev.srcElement;
      if (target.className.indexOf(linkclass) !== -1) {
        ev.preventDefault();
        toggleClassName(doc.body, activeclass);
      }
    };
  doc.addEventListener('click', navListener, false);
}(this, this.document));

I’m all set, right? Not so fast!

Cutting the mustard

I’ve made the assumption that addEventListener will be available in my JavaScript. That isn’t a safe assumption. That’s because JavaScript – unlike HTML or CSS – isn’t fault-tolerant. If you use an HTML element or attribute that a browser doesn’t understand, or if you use a CSS selector, property or value that a browser doesn’t understand, it’s no big deal. The browser will just ignore what it doesn’t understand: it won’t throw an error, and it won’t stop parsing the file.
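For example, a browser that doesn’t understand the rgba() colour value in this sketch simply ignores that declaration and falls back to the one before it (the selector and colours are arbitrary):

.promo {
  background-color: #cc0000; /* understood by every browser */
  background-color: rgba(204, 0, 0, 0.8); /* ignored by browsers that don't support rgba() */
}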

JavaScript is different. If you make an error in your JavaScript, or use a JavaScript method or property that a browser doesn’t recognise, that browser will throw an error, and it will stop parsing the file. That’s why it’s important to test for features before using them in JavaScript. That’s also why it isn’t safe to rely on JavaScript for core functionality.

In my case, I need to test for the existence of addEventListener:

(function (win, doc) {
  if (!win.addEventListener) {
    return;
  }
  ...
}(this, this.document));

The good folk over at the BBC call this kind of feature test cutting the mustard. If a browser passes the test, it cuts the mustard, and so it gets the enhancements. If a browser doesn’t cut the mustard, it doesn’t get the enhancements. And that’s fine because, remember, websites don’t need to look the same in every browser.

I want to make sure that my off-canvas styles are only going to apply to mustard-cutting browsers. I’m going to use JavaScript to add a class of .cutsthemustard to the document:

(function (win, doc) {
  if (!win.addEventListener) {
    return;
  }
  ...
  var enhanceclass = 'cutsthemustard';
  doc.documentElement.className += ' ' + enhanceclass;
}(this, this.document));

Now I can use the existence of that class name to adjust my CSS:

@media all and (max-width: 35em) {
  .cutsthemustard [role="main"] {
    transition: all .25s;
    width: 100%;
    position: absolute;
    z-index: 2;
    top: 0;
    right: 0;
  }
  .cutsthemustard [role="navigation"] {
    width: 75%;
    position: absolute;
    z-index: 1;
    top: 0;
    right: 0;
  }
  .cutsthemustard .active [role="main"] {
    transform: translateX(-75%);
  }
}

See the enhanced mustard-cutting off-canvas navigation. Remember, this only applies to small screens, so you might have to squish your browser window.

Enhance all the things!

This was a relatively simple example, but it illustrates the thinking behind progressive enhancement: once you’re providing the core functionality to everyone, you’re free to go crazy with all the latest enhancements for modern browsers.

Progressive enhancement doesn’t mean you have to provide all the same functionality to everyone – quite the opposite. That’s why it’s key to figure out early on what the core functionality is, and make sure that it can be provided with the most basic technology. But from that point on, you’re free to add many more features that aren’t mission-critical. You should reward more capable browsers by giving them more of those features, such as animation in CSS, geolocation in JavaScript, and new input types in HTML.
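A geolocation enhancement, for instance, can be wrapped in the same kind of feature test. In this sketch, showNearbyEvents is a hypothetical function, and a plain form asking for a postcode stands in for whatever the core functionality happens to be:

if ('geolocation' in navigator) {
  navigator.geolocation.getCurrentPosition(function (position) {
    // Enhance the page with location-aware content
    showNearbyEvents(position.coords.latitude, position.coords.longitude);
  });
}
// Browsers without geolocation skip straight past this block;
// the core functionality (the postcode form) still works.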

Like I said, progressive enhancement isn’t a technology. It’s a way of thinking. Once you start thinking this way, you’ll be prepared for whatever the next ten years throws at us.

About the author

Jeremy Keith lives in Brighton, England where he makes websites with the splendid design agency Clearleft. You may know him from such books as DOM Scripting, Bulletproof Ajax, HTML5 For Web Designers, Resilient Web Design, and, most recently, Going Offline.


He curated the dConstruct conference for a number of years as well as Brighton SF, and he organised the world’s first Science Hack Day. He also made the website Huffduffer to allow people to make podcasts of found sounds—it’s like Instapaper for audio files.


Hailing from Erin’s green shores, Jeremy maintains his link to Irish traditional music by running the community site The Session. He also indulges a darker side of his bouzouki-playing in the band Salter Cane.


Jeremy spends most of his time goofing off on the internet, documenting his time-wasting on adactio.com, where he has been writing for over fifteen years.
