Adaptive Images for Responsive Designs



Barnaby Walters

Whilst you could go the sub-domain route, it’s possibly less elegant and less future-proof.

Okay, this suggestion doesn’t sort out the speed problem or the cookie problem, but it could sort out the REST problem.

Set up a system similar to CodeIgniter’s, where everything happens like this:

Except use a RewriteRule to remove the index.php, giving what look like sensible URLs.

You could use this same technique with the adaptive images script, putting it in an img directory as index.php, then requesting images from that directory. A RewriteRule would rewrite the URLs from

To (on the server side)

This script could then do the resizing based on the size in the cookie, whilst preserving real URIs.
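For illustration only – the example URLs in this comment didn’t survive, so the exact scheme is lost – but the kind of rewrite being described would look something like this in an .htaccess file inside the img directory (the `src` parameter name is hypothetical, assuming Apache mod_rewrite):

```apache
RewriteEngine On

# Send every image request in this directory through index.php,
# while the browser only ever sees clean URLs like /img/photo.jpg
RewriteCond %{REQUEST_URI} !index\.php
RewriteRule ^(.+\.(?:jpe?g|png|gif))$ index.php?src=$1 [L,QSA]
```

index.php can then read the requested path from `src`, check the size cookie, and output a suitably scaled image.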

I did see someone propose an addition to the HTML spec: the picture tag, which works like video or audio where you can specify different image sources. In an ideal world, instead of a stupid UA string, browsers would send a dictionary of stuff like screen size, connection speed etc to the server side.

Matt Wilcox

Hi guys, thanks for the feedback :)

I’m looking forward to seeing “The Future” section of the article addressed and talked about more. All current mechanisms to deal with this issue are hacks that harm some element of the web-design stack somewhere.

@Peter Lejeck

That’s not a good solution either, because as mentioned in the article you will download the image twice: once at mobile resolution and once at desktop. That’s why pure JS solutions will not work.

@Chris Heilmann

The instructions here say 755, not 777 – the 777 was a mistake to have left in the downloadable instructions. Thanks for pointing that out! 755 will not leave your folder open to be writable. I’ll be sure to mention in future documentation that people may want to point the ai-cache folder outside of the web root :)

@Sripathi Krishnan

Could you clarify why point 1 is a point at all? How is it less reliable?

Point 2 is mentioned in the article, but it is of no relevance at all to people who aren’t using CDNs. Also, I’d wager that it’s faster to download a quarter-size image locally than an image four times bigger from a CDN. CDNs improve matters, but it’s latency as much as anything, right?

As for REST, yep, I do share that “ick” element. No solution is perfect – they all stumble badly somewhere – and AI is not always going to be the most appropriate solution :)

Jason Grigsby

@Nicolas Chevallier,

Like Matt said, I’d love to see your alternative.

I did a lot of research into this problem. The issues are outlined here:

I examined nineteen different techniques as part of my research on responsive images:

A Google Spreadsheet containing all of the different techniques and my analysis using a proxy server to watch asset loading can be found at the bottom of the second link.

It is entirely possible that everyone has missed a simple and obvious solution. If so, please publish it. We’d welcome it and thank you profusely.

But given the amount of time people have put into solving this problem, it is likely that there isn’t a great solution.

Matt’s solution may be a hack, but that’s all we’ve got at the moment. Adaptive images is one of the best solutions I reviewed and probably the easiest for people to implement.

Thanks Matt for a great article and for continuing to try to improve techniques for images in responsive designs.


Matt Wilcox

I completely agree that other approaches will be appropriate for certain types of site. But I think one of the things that developers keep missing about a lot of approaches that get suggested as solutions is who actually supplies content for websites.

It is rare that the people putting the content in are web developers. They don’t have the knowledge that we do, and CMSes are not geared to this. Can you imagine trying to teach a regular Joe working at a magazine, or council administration, or the local school, that in order to put one picture into an article they have to upload three or four pictures, and then code the image tag using three or four (to them) weird text strings?

One of the major advantages of the AI approach is that none of that is an issue. You don’t need to do any of that because the process is entirely automated and, better yet, images are generated only on demand. If no-one ever visits My Big Fat Article using a mobile phone, then the mobile-resolution versions of the images are never generated. But if someone does – boom, they’re there.

And while there are definitely aspects of AI that I don’t like (the fact that the image URL may deliver one of any number of scaled images being my biggest bugbear), I can’t see the mark-up solutions being practical for non-developers, and the idea that we just supply a tiny 1px GIF for all images until some JS is run feels even worse.

At least with the AI approach the semantic value of the image is retained. All it is really doing is delivering different scaled versions of the same thing.

What’s so interesting to me isn’t so much the flaws that all current solutions have, but the fact that it’s so difficult to come up with an ideal solution at all. I remain convinced that it has to be a two-fold solution, one server side and one client side. Because they solve different issues, despite seeming at first glance to be doing the same job.

Chris Heilmann

I filed a bug on GitHub about this, too. It is very dangerous to create a folder on your system that is readable, writable and executable by everyone. Any attacker could run their own scripts in this folder and turn your server into a zombie, or simply send spam out from it.

Better to use the server’s /tmp, or to make the cache folder writable only by scripts from the same source.

Sripathi Krishnan

This isn’t an architecture I would recommend for any website.

1. Using PHP to serve images is much slower than letting Apache do it for you. It’s also far less reliable.
2. It completely breaks the CDN. I’d rather the user downloads a bigger image from the CDN, than a smaller, non-cacheable image from the origin.
3. It breaks REST principles. You want the URI to uniquely identify the image. With this approach, the cookie plus the URI determines the image.

I am afraid this is striving for Fluid design at the cost of performance and system architecture.

A better solution is perhaps to redirect the user to a dedicated sub-domain for mobiles or for tablets. Then use PHP to check the domain (instead of the cookie) and serve resized images.

The advantages are several: CDN and cache friendliness, REST URLs, and an easy path to serving different markup in the future if required.

Drew McLellan

Peter – of course it’s a bit of a hack – it has to be. There’s no specified way to deal with what is essentially a new problem. That’s what the web does – we find a problem, hack a solution together, evaluate it, refine the idea, hack together a new solution, and so on.

The reason for using server-side code (be it PHP or whatever) is that image files are being generated to match the breakpoints as needed. You can’t achieve that with client-side JavaScript.

Drew McLellan

Sripathi – I think the REST point is an important one, and a concern I share. Each unique image file should have a unique URI, so it would be ideal if a new URL could be generated client-side before the request is made.

I’m not sure one sub-domain for mobiles and another for tablets really works, as what size is a mobile or tablet screen? Most single devices have at least two sizes. It would seem more appropriate to include the breakpoint size, if anything.

Sort out the REST issue and the CDN issue sorts itself.

The whole adaptive images issue is a difficult problem that’s not yet solved, so the more discussion and sharing of ideas the better.

Matt Wilcox

With regard to the REST problem. I’d been mulling over an idea that someone else had suggested but wasn’t convinced it would work well. So, while people are reading could you please give your thoughts on this…

Instead of the script serving the adapted image straight up, it could send a header to say the image has temporarily moved, and ask the browser to request the device-sized url instead.

i.e., instead of sending /480/image.jpg when the page had requested /image.jpg, is there any mileage in the idea of sending a header back to the browser to say “actually, go download /480/image.jpg instead”? I’m not sure how this would work with proxies and caches.

Nicolas Chevallier

It’s a great idea, but the implementation doesn’t suit me (nor others, apparently). I have a problem with having to sniff information from the client, and it poses more problems than it solves. There is probably a smarter way to implement this solution.

Drew McLellan

Roman – if you had kept reading, you would have seen that Matt outlines the technique in detail, so that it would be easy to implement in whichever server-side technology you use.

You would have also seen that it’s CC licensed (so no IP worries in transcoding it) and there are already ports to other languages.

Perhaps your time would be better spent reading on, rather than leaving acidic “fail” comments.

Matt Wilcox

@Nicolas Chevallier

That’s fair enough :) I’d be interested to know how this solution makes more problems than it solves though, and how to implement it better – please let me know if you have any ideas.


“I stopped reading when author assumed I can use PHP in my project.”

What did you expect the author to do – write an implementation in every server-side language under the sun?!

Sripathi Krishnan


More info on the redirect to sub-domain approach :

Step 1: The first trick is to use WURFL, a database that maps user agent strings to device capabilities. This means you can figure out the device’s or browser’s aspect ratio server-side on the first request.

Step 2: Next, figure out how many different image resolutions you want to support, and then create a sub-domain for each resolution. Typical choices would be mobile, tablet, wired, etc., but you are free to create as many sub-domains as you want.

Step 3: Then, redirect the user to the appropriate sub-domain on the first request.

Step 4: Finally, in Apache, create virtual hosts for each sub-domain, so that Apache serves each one from a different folder. No PHP required. Also, since each URL is unique, you can set strong cache headers.
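A rough sketch of step 4 – the domain names here are hypothetical (the originals didn’t survive in the comment), and real configs would also set cache headers:

```apache
# Each resolution sub-domain serves pre-resized images from its own
# folder, so no PHP runs per request and URLs stay unique and cacheable.
<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/images-desktop
</VirtualHost>

<VirtualHost *:80>
    ServerName m.example.com
    DocumentRoot /var/www/images-mobile
</VirtualHost>
```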

Sripathi Krishnan

>> Point 2 is mentioned in the article, but is of no relevance at all to people who aren’t using CDNs

Not entirely. You set the header Cache-Control: private, which defeats caching at all levels between the server and the browser.

For example, if you are behind a corporate proxy, those images won’t get cached. If you are hosting on Google App Engine, Google’s front-end servers will not cache your images, costing you more money. ISPs may have their own caches – and they wouldn’t be able to cache them either.

Nicolas Chevallier

I don’t think we can rely on cookies or server sniffing in general. In terms of performance, the cookie will slow down all future requests, since it will be sent with each request. Putting the JavaScript in the head tag is, in my opinion, also a bad thing. But I am sure this technique can be implemented differently :)
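For reference, the head-of-page script being discussed boils down to recording the screen size in a cookie before any image requests fire – something like this sketch (the cookie name and exact format are assumptions, not necessarily what the article’s script uses):

```javascript
// Build the cookie value from the device's screen dimensions.
// Written as a pure function so the format is easy to see and test.
function resolutionCookie(width, height) {
  // Use the larger dimension so orientation changes don't matter.
  return 'resolution=' + Math.max(width, height) + '; path=/';
}

// In the page <head>, before any <img> requests are made:
// document.cookie = resolutionCookie(screen.width, screen.height);
```

Because it must run before the first image request, it has to sit in the head – which is exactly the placement being objected to here.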


Thanks for the clear explanation of the technique. Seems like a pretty good solution, useful in certain projects (as you mention, depending on what kind of setup, cms, etc there is).

It’s thanks to people like you, Matt, experimenting with new techniques, improving existing ones, etc., that we learn a lot and make progress.

Andrew Woods

It’s a great start to solving a new problem. It’s not a perfect solution, but it doesn’t claim to be. What I like about this solution, though, is that the impact on the designer is negligible. Also, the tech already exists, so there are no major hurdles once the issues get addressed.

Thanks for your effort, Matt.

A possible improvement is to use cron to check for images of your required sizes and generate them if need be. This way you don’t have the security issue of the web server user generating images. Also, this should be more compatible with a CDN. How often you want cron to run depends on how busy your site is and how often the images change.

Paul Cripps

Hi Matt,

Great article – we’ve been thinking about this a lot recently. I think some of these comments have valid points, but each one could differ based on the website’s requirements.

What I’m loving is that it’s a great idea and a starting point for further discussions, thoughts and ideas, and as @MATTHIJS pointed out, thanks for pushing the boundaries for us all… I hope in time greater and stronger solutions will emerge.

Thanks Matt.

Mike Gossmann

The more I read about solutions like this, and the more I look into solutions to the whole “responsive image” problem, and the more I see people poke holes in the solutions, the more I start to wonder if maybe we’ve defined the problem wrong.

If an image is coming in through an img tag, then isn’t it part of the content? Arguably it’s part of what the user came to the page to see. So why should we give less of this content to mobile users?

Less bandwidth? Then why aren’t we sending them only half the text too? The user is here for the content. If the image is worth the bandwidth of including, then it’s worth the bandwidth of including the whole thing. Just make sure it’s properly compressed so you’re not wasting the desktop user’s bandwidth either.

Less screen space? This applies to the text as well. Only the first couple of paragraphs fit on the screen until the user scrolls – just like they don’t see the whole image until they zoom. Even if they don’t zoom, a retina display (or equivalent) will still show every pixel of that image until you scale it below 50%.

So if the 320 pixel-wide image is enough to get the point across, why are you wasting the bandwidth and screen real-estate of the desktop user with the larger image?

I think the real problem might be that there’s still a lot of images out there that are tightly related to the content, but at the end of the day are really just decorative. There to break up the text, but not to illustrate a point.

I’ll probably catch a lot of flak for this, but I’m starting to wonder if the solution to the problem of these images that walk the line between content and decoration is to use code that does the same: smart and careful use of inline styles. Set almost everything about that type of image up in the stylesheet, and then include the actual image URL in a style tag, as part of the content and controllable through media queries. Or maybe something similar using the new data- attributes. Also, not a word about this making redesigns harder – these were img tags a minute ago.


Thanks for this idea to solve a ‘real world’ issue, it has many salient points and solid strategic thinking.

If nothing else this article identifies and tries to solve the real problem of devices having to deal with unnecessary downloads and excessively large images

In dealing with the moving target of our reality as developers, i find this to be a very positive suggestion.

This may not be the ‘perfect solution’ for all, but it is indeed a great step in the right direction.

Thanks for taking the time to put this together;

… and here’s to keeping the ball moving forward


Darren Miller

There are a lot of strong opinions on this issue. I guess that reflects the fact – as many have stated – that we’re in territory that the W3C and browser makers are not catering for. Until they do I believe it’s a question of picking the appropriate technique for the needs of the project at hand.

For my two cents, I favour the data attribute approach. That is, form your images as follows:

<img src="default.jpg" data-med-src="medium.jpg" data-lrg-src="large.jpg" alt="image description" class="responsive-image" />

Then loop over all the responsive images really early using JavaScript to load the appropriate source. The advantages of this approach are:

* It’s clear from the markup what images are being used, and all have unique URLs

* It puts control of the display in the hands of the designer or content author. Alternatively, the web application can generate static images at time of publishing (a definite improvement over on-the-fly)

* Images can be changed on resize in addition to page load. Many techniques miss this trick and for me it’s a biggie. I don’t know of another way to catch an orientation change – and that may well require a different image.

* There are many options for fallback: leave out the src attribute, default to low res src or even a noscript
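A minimal sketch of that swap (the breakpoints are hypothetical; attribute names follow the example markup, and the class is assumed to be responsive-image):

```javascript
// Decide which source an image should use for a given viewport width.
// attrs mirrors the src and data-* attributes on the <img> tag.
function pickSrc(attrs, viewportWidth) {
  if (viewportWidth >= 1024 && attrs.lrgSrc) return attrs.lrgSrc;
  if (viewportWidth >= 600 && attrs.medSrc) return attrs.medSrc;
  return attrs.src; // fall back to the default (smallest) source
}

// Run early on page load, and again on resize/orientation change:
function swapAll() {
  var imgs = document.querySelectorAll('img.responsive-image');
  for (var i = 0; i < imgs.length; i++) {
    imgs[i].src = pickSrc({
      src: imgs[i].getAttribute('src'),
      medSrc: imgs[i].getAttribute('data-med-src'),
      lrgSrc: imgs[i].getAttribute('data-lrg-src')
    }, window.innerWidth);
  }
}
```

Binding swapAll to both load and resize events is what makes the orientation-change case work.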

I also suspect this approach is the closest in concept to an eventual browser-based solution. I did my own research on the subject recently which demonstrates the above technique. It needs some refinement though and some of the links above are a bit more mature.

Matt Wilcox

Also, thanks everyone for all of the feedback :) I’ve not yet had a chance to reply to all the points I’d want to, but you’ve all given me food for thought and it’s great to see people thinking about the issues :)

Geraint Hywel

The new cookie law in the UK might be a problem. Whilst I don’t think anyone has been prosecuted for not getting consent before setting cookies, it does make me reluctant to get very invested in anything new that relies on cookies.

I have wondered if it might be possible to use node.js to avoid the cookie problem. Because we can use setTimeout() on the server, it might be possible to delay responding to image requests until the device has had a chance to tell the server its capabilities (in a separate, JavaScript initiated request). Once the server has been notified of capabilities, it just returns a redirect to the appropriate URI (great idea @MattWilcox).
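The core of that idea, separated from the HTTP plumbing, might look like this – everything here is a hypothetical sketch (session handling, the URL scheme, and the timeout fallback a real server would need are all glossed over):

```javascript
// Image requests wait until the client's JavaScript has reported the
// device width; each pending request is then answered with a redirect
// target pointing at an appropriately sized URL.
function createCapabilityGate() {
  var widths = {};   // sessionId -> reported device width
  var waiting = {};  // sessionId -> callbacks awaiting that width
  return {
    // Called when the JavaScript-initiated capabilities request arrives.
    report: function (sessionId, width) {
      widths[sessionId] = width;
      (waiting[sessionId] || []).forEach(function (cb) { cb(width); });
      delete waiting[sessionId];
    },
    // Called for each image request; cb receives the redirect target.
    request: function (sessionId, path, cb) {
      var done = function (w) { cb('/' + w + path); };
      if (sessionId in widths) return done(widths[sessionId]);
      (waiting[sessionId] = waiting[sessionId] || []).push(done);
    }
  };
}
```

In a real node.js server each callback would send a 302 with a Location header, and setTimeout() would resolve any requests still waiting after a short delay with a default size.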

How the relevant images get created is less important. Clearly, PHP/GD is fine and dandy in many scenarios.

Nate Klaiber

I have a question in regard to separation and ownership. I think others have covered some of the other dangers, but I also agree with Drew that those can get ironed out over time. I am glad you put options out there.

1. Who owns the media query breakpoints? If a member of the UX team (the owner) changes or adds to those breakpoints in the CSS files, won’t they have to remember to update the breakpoints in the PHP file too?

2. Precompiling. Since you know the possible breakpoints up front, you could write some PHP to precompile (cache) all possible assets, which means all requests will hit a cached image. You won’t be using PHP to resize on the fly (which could negate the point of it taking less time, as it would take less time to download two images than to have PHP write new image files on request).

Thanks for showing us your thought process.

Matt Wilcox

The UK cookie thing is bull. I can’t see it ever being enforced, because if it were, most websites would never work. Google Analytics, for example, would be unusable by any site inside the UK, and imagine how hard businesses would cry out if they were no longer able to collect analytics stats. It’s a short-sighted law made by people hell-bent on “the privacy issue” who don’t actually understand the implications of such a law. Heck, it’s issued by people incompetent enough to link to the actual regulations as a PDF instead of in HTML.

I don’t know anything about node.js, sounds intriguing :)

John B

@Geraint & cookie law-writers: I didn’t know about that cookie law, and yes, it’s bull. Most websites these days set some cookies as a matter of course (session cookies or whatever).

@Sripathi: As nice as it would be to rely on WURFL, they’ve gone commercial, so to use it on a business’s website (I’m not sure about commercial websites) costs US$1,500.

@Matt, now for the main part of my comment: I think you missed an opportunity in your comment about using a redirect header to solve the REST problem. It could probably be used to solve the CDN problem as well, if the flow goes something like this:

1) The request for /images/good-looking.jpg is rewritten to adaptive-images.php by the RewriteRule in .htaccess

2) adaptive-images.php does its magic, but instead of returning image data to the browser it sends a 301 or 302 redirect header to that image’s location on a CDN.

3) The CDN then uses the /ai-cache/ folder on your webserver as its origin and serves the properly-resized image, but through the CDN.

That means there’s a whole extra HTTP request, with DNS lookups and all that, but if the cache-control headers are done right you may get really good caching in the browser, and I’m not sure the extra DNS lookup is much slower than sending image data through PHP.

As a side note, it doesn’t seem like Textile’s numbered lists are working in the comments.

Matt Wilcox

@John B

That’s pretty much what I was thinking but I don’t know very much about CDNs so I wasn’t sure quite how it’d work. For example, let’s assume that someone’s proxy is looking at an AI enabled site – what size image does the proxy get? If that proxy machine is fed a 301/302 redirect, does it simply cache the redirected image instead of the source URL? If so, then it’s broken and won’t work in the same way it’d be broken if you let any intermediate cache store an AI image. I just don’t think that there’s a way to make AI work with CDNs/Proxies because one of the design decisions was that the mark-up doesn’t change. Which in turn means the request URI doesn’t change. Which in turn means any cache that isn’t the end user’s will be wrong most of the time.

I’m not a PHP expert by any means (until AI it had been 6yrs since I did any PHP), and I don’t understand why images served through it are seen to be ‘slow’. If the image exists cached, all PHP is doing is flinging the image out. If the slowness is simply instantiating the PHP then it’s already been done through the .htaccess and there’s no real penalty from serving an image vs having the PHP issue a HTTP redirect (which the browser then has to deal with). Also, the PHP is required to check that the cached version of the image is newer than the source file – otherwise AI could serve outdated cached images in the event that someone changed the source image after the cache had been generated.

Wouldn’t it be the case that HTTP redirects are going to be considerably slower than simply supplying an image? We’d double the number of HTTP calls, and that means a lot of latency, especially on mobile networks where it’s not so much bandwidth that’s the problem as latency. More HTTP calls would surely be worse? Isn’t that why we package up libraries and use CSS sprites?

I think there are a couple of potential options, but without some serious testing it’s hard to know which is better. My gut is telling me that redirects won’t work out too well, but I would love to be better informed.

As a side-thought (and philosophical question): is it really a bad thing to send different sized versions of the same content from one URL? It’s not ideal, but is it that bad? We are sending the same semantic content, after all – it’s not like we’re sending a semantically different document based on the size of the user’s screen. Likewise, when exactly would someone need or want to save a different sized version of an image than the one they were already looking at? Does it actually matter?

John B

I’m not an expert at all when it comes to proxies and caching. What I would hope is that we could get it so that browsers would cache the image but proxies wouldn’t. However, at that point I’m not sure there’s much use for a CDN, since the “CDN” would be the browser cache.

For PHP speed, I haven’t done much measurement of PHP throughput when just grabbing an existing file and sending its contents. I would expect some sort of performance hit compared to simply downloading a static image file because of the extra overhead of PHP in the request, but I don’t know how bad it would be. Some testing would have to be done. I also hadn’t thought of the increased latency on mobile networks, that probably makes it so we don’t want to redirect, if at all possible.

Another thought I just had: it might be possible to make the .htaccess file / Apache check for a cached image file and avoid firing up PHP altogether if one exists. That would take some good .htaccess-fu, but it may be possible.

As for the philosophical question, there are two ways of looking at it. One is that the content is the same (a photo of me, or whatever), so as long as the URL of /photos/john.jpg always returns the same photo of me, it doesn’t matter what happens behind the scenes. The flipside to that argument is that if the images are different sizes then they’re not the same image, so they should have different URLs. The developer in me thinks the images are different, but the pragmatic approach says they’re the same. Really, I don’t think a website visitor will care either way so long as he can see what he wants quickly, without incurring a huge bandwidth bill.

Matt Wilcox

@ John B

AI already instructs proxies to not cache and browsers to cache – so I don’t think there’s any advantage (and likely a disadvantage) to punting the image request to a CDN. The disadvantage would be you’re then relying on an additional provider to send an image and having to wait for access to an external resource. The only possible advantage I can see is not using your own server’s space – otherwise it’s always going to be slower than using a local file?

I agree, using PHP to pipe an image will be slower than just loading the image, but AI’s already going to use PHP to select the right image, so it doesn’t make much difference? As you say – if there’s a way to get the .htaccess to do it without firing up PHP that’d be beneficial, but I just don’t think it’s possible, because the PHP has to do so much that the .htaccess couldn’t: does the file exist at that resolution? If not, does the source file exist? If so, should it be downscaled? Do the downscale and save the file. If it does exist at that location anyway, is it newer than the original file (i.e., not stale)? There’s no way you can do that stuff without PHP (or Ruby, or however it’s implemented).

As for the last paragraph – absolutely with you on that too! A little bit torn, but my pragmatic side is saying that it’s not really an issue.

Thanks for the feedback and giving this a good think over – just the sort of thing I was hoping for :D

Robin Winslow

Two issues mentioned above I was particularly interested in (originally raised by @SRIPATHI KRISHNAN):

1) Adaptive Images breaks CDNs
2) Adaptive Images breaks REST

Both of these can be solved, I believe:

1. Some CDNs, I understand, can cache resources based not only on the URL but also on HTTP headers (I’m afraid I don’t yet have a citation for this). So you could presumably specify that the CDN should cache different versions of the file depending on the “device-width” cookie.

2. If you have a URL for the image which then redirects to the appropriately sized version, you preserve the integrity of the URL for the image resource. The CDN can then cache the different redirect response headers as described above.

I was also very interested in @PETER LEJECK’s idea of serving the mobile image to start with and replacing it with JavaScript after the page has loaded, based on screen size. Yes, you end up loading the image twice, but the first time it’s only the mobile version, which is quite small, and at least the user will see a low-quality image immediately, so the extra time taken to download the larger image won’t break the user experience that badly. It also allows you to request image sizes much more specifically than Adaptive Images does, since Adaptive Images works from the width of the viewport, not the width of the actual placement the image is to be displayed within. Plus, this would be a completely client-side solution, which is simpler to implement.

I also really like @SRIPATHI KRISHNAN’s idea of using WURFL or a similar web service to tell you about devices by user agent string. This would then be a completely server-side solution, to contrast nicely with @PETER’s.

Matt Wilcox

@Robin Winslow

I’d love to know more about your first point – whether that’s a setting I can send in headers, and if so, how. I’m not at all up on how CDNs actually work – I am a front-end developer mainly! Thanks for that info :)

As for the sending a re-direct or re-loading images via JS… I’m still not sold on that solution. It would work I’m sure, I’m just not convinced it’s a better thing than how it currently works. On mobile networks especially the killer is often network latency which is far worse than the bandwidth (which is also poor). I think by doubling the number of HTTP requests you’d slow sites down too much, to the point where you may as well not bother with the adaptive solution at all.

I’d toyed with @Sripathi’s idea before too, but can’t bring myself to rely on it. If people want to fork AI and go that route they are very welcome to do so :) That also reminds me, I never did address @Sripathi’s posts – sorry man, they’re good points well made.

Matt Wilcox

After a bit of research it looks like the Vary header won’t allow you to specify precisely which cookie is the one to watch, which means it’s not a viable solution. Any cookie change on the same domain would trigger a cache re-request :( For the moment, it looks like Cache-Control: private is the only solution that will allow AI to work for the end user and not let proxies get in the way.

Adam Norwood

I love the 24ways time of year, it gets everyone talking (or arguing) about some interesting ideas!

The suggestions about having the HTML spec allow for multiple image sizes (perhaps through a hypothetical <picture> tag to match <video>) and about defaulting to the smaller mobile size with a JavaScript swap, reminds me of the lowsrc attribute that used to be available on the img tag in browsers of yesteryear (it was first in Navigator, then IE4 followed suit, if memory serves).

I don’t think lowsrc ever made it into the official HTML spec (it’s not in HTML 4.01, but it is in the list of “obsolete” attributes in the HTML 5 spec), and browser support beyond NS4 and IE4 was spotty. In hindsight it sort of made sense, especially as we’re again looking for ways to provide a pleasing experience to users with a narrow connection (in the 90s it was dial-up slowness, now it’s visual real estate and mobile device slowness). A new version would need to be implemented in a way that’s more flexible than the “low-res / hi-res” binary choice that lowsrc offered, browsers would need to know which file to actually download (lowsrc-enabled browsers would always download both resources), and there would need to be browser buy-in, so it might be a long while. In the meantime I like Darren Miller’s suggestion of using data- attributes to enhance the image tag as needed, although you might run into doubled-up resource loading if you don’t have the JS firing at exactly the right time (?).

For what it’s worth, I lean towards the argument of URLs needing to be unique for the sake of REST principles, but that does get into philosophical territory (e.g. web pages are also supposed to be served up by a URI, but depending on a visitor’s capabilities, cookies, etc., they likely do get different content, ads, features, authentication, etc.) But in case someone reshuffles this technique to use unique URLs for each generated image, I think you could add the following to your .htaccess rewrite condition stack to determine whether an image for a certain display size has already been generated:

RewriteCond %{REQUEST_FILENAME} !-f

(apologies if I’m misreading how the current setup works, I haven’t tried it out yet…)
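For context, that condition would slot into a rewrite stack something like this (a sketch only – the script name follows the article, the rest is assumed): requests whose sized file already exists on disk are served directly by Apache, and only cache misses fire up PHP:

```apache
RewriteEngine On

# If the requested file already exists, Apache serves it directly
# and PHP is never started…
RewriteCond %{REQUEST_FILENAME} !-f
# …otherwise hand the image request to the script to generate it.
RewriteRule \.(?:jpe?g|png|gif)$ adaptive-images.php [L]
```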

Geraint Hywel

FWIW, the REST issue might not be a real problem after all.

According to the section of RFC2616 that deals with content negotiation:

Server-driven negotiation is advantageous when the algorithm for selecting from among the available representations is difficult to describe to the user agent, or when the server desires to send its “best guess” to the client along with the first response (hoping to avoid the round-trip delay of a subsequent request if the “best guess” is good enough for the user).

(my emphasis)

My reading of that is that it’s OK to serve different resolutions (or “representations”) from a single URI.

HTTP Status Code 300 looks like an ideal response:

The requested resource corresponds to any one of a set of representations, each with its own specific location, and agent-driven negotiation information (section 12) is being provided so that the user (or user agent) can select a preferred representation and redirect its request to that location.

I guess this would entail the same additional HTTP requests that 302s would.

Some very quick checking saw Firefox 8 deal with a 300 + location header sensibly (redirecting to the specified “default” location).

Chrome doesn’t redirect to that location, at which point this felt like a non-starter.


Re: CDN, I have rsync set up between my image cache directory and an image directory on the CDN. When the script detects an image that has already been cached, it returns the URL of the resource on the CDN instead of the origin server. The challenge was that it also had to check the timestamp and continue to serve from origin until the file was at least five minutes old.

I do that on a site that gets about 60 million hits a year and it works fine. I wish there was a better way, though.

Peter Vincent

I think that this is pretty neat and I am going to try it on the website for my club.

I liked the demo, but for the life of me I cannot see anywhere on the page what image file size is actually served. I did look at the demo on my iPhone, saved the images to Photos, and then emailed them to myself. It appears that either iPhone Safari is served the full size when the action is “save”, or else iPhone Safari does not work with your ingenious solution.

May I respectfully suggest that you provide a way to display in the browser what size of file is served, so that the iPhone will show whether it is working as you intended. I wonder if the code linked above would be useful in this regard?

Best regards and thank you

Peter Lejeck

This is an awful hack, and the fact that it uses PHP pains me, to say the least. While perfectly well-intentioned, it is hacky and could be done in JS: use an img that’s already mobile-sized, then swap to the desktop size in JS.
