Friday, May 22, 2009

ClearType, or: Welcome to the State of the Art for the Year 2000

FONTS WEEK!

The moral of Fonts Week, if there is one, is that typography is really, really complicated.

For example, which do you optimize for: reading on the screen or reading on paper? Traditionally, of course, there was no contest - fonts needed to look good on paper, and nothing else. Even when computers started displaying real fonts on the screen in the late 80s, the focus was still on the page - so the mark of a "good" font display was how closely the text on the screen matched the way it was going to look on the page.

Take Times New Roman, for instance. TNR, that stock font for college essays everywhere, was originally commissioned by the British newspaper The Times in 1931, and was their standard text font from then through the early 70s. While The Times itself doesn't use the font any longer, it's become a standard baseline for pretty much any print that needs to look good.

The problem is that it just doesn't look all that great on the screen. The issue is one of resolution - TNR (like most other fonts) was designed to look good at print resolution, 300 or 600 dpi. Computer displays top out, generally speaking, at about 96 dpi, which means that in a best-case scenario you're seeing about a third of the detail in each dimension (96 out of every 300 dots) - roughly a tenth of the total information in any given TNR letter.

A brief review, at this point, of how computer displays work. A computer screen has a finite number of pixels arranged in a grid, each of which can be a single color at any given time. Since each pixel is a square, this is what gives computer displays that slightly blocky look - your standard XGA screen is 1024 pixels wide by 768 pixels tall, which really isn't all that many. If you imagine the screen as a sheet of graph paper where you have to fill in a selection of squares to make an image, it's easy to see how you end up with jagged edges, since each block can be either on or off. More pixels and more colors reduce the problem, but this is still a fundamental aspect of how computers display images.
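If the graph-paper analogy feels abstract, here's a toy sketch in Python that makes it concrete. Nothing here is from any real font engine - the line, grid size, and characters are all made up for illustration - but the staircase in the output is exactly the jaggedness in question:

```python
# Toy "graph paper" model of a screen: rasterize a diagonal line into
# a coarse grid where every cell is either fully on ('#') or fully
# off ('.'). The smooth slope turns into discrete steps.

WIDTH, HEIGHT = 24, 12

grid = [['.'] * WIDTH for _ in range(HEIGHT)]

# Draw the line y = x / 2; each cell can only be on or off.
for x in range(WIDTH):
    y = x // 2
    if y < HEIGHT:
        grid[y][x] = '#'

for row in grid:
    print(''.join(row))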

The first batch of technology to reduce this effect was anti-aliasing. Simply put, after something is drawn into the pixel grid, AA does additional passes to draw in fainter pixels around the jagged edges, making the image appear smoother.
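Here's a minimal sketch of the textbook formulation of the idea - rather than patching edges after the fact, it samples each pixel at several sub-positions and shades the pixel by how much of the shape covers it, but the visible effect is the same smoothing. (The shape, sizes, and shading ramp are, again, just made up for illustration.)

```python
# Anti-aliasing by supersampling: test each pixel at a 4x4 grid of
# sub-positions and shade it by the fraction of samples the shape
# covers, instead of forcing a hard on/off decision.

WIDTH, HEIGHT, S = 24, 12, 4
SHADES = " .:-=+*#"          # light -> dark, by coverage

def inside(x, y):
    # The illustrative shape: everything below the line y = x / 2
    # (y grows downward, as on a screen).
    return y > x / 2

for py in range(HEIGHT):
    row = ""
    for px in range(WIDTH):
        hits = sum(inside(px + (i + 0.5) / S, py + (j + 0.5) / S)
                   for i in range(S) for j in range(S))
        coverage = hits / (S * S)              # 0.0 .. 1.0
        row += SHADES[round(coverage * (len(SHADES) - 1))]
    print(row)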

The problem gets exacerbated with text, since text is teensy. Anti-aliasing needs a fair amount of space to work freely, and a single letter of text on the screen just doesn't have all that many pixels. For example, depending on your settings, the letter i is probably only a pixel or two wide. At 2x2 pixels, there aren't a lot of ways to make the dot over the i look like anything but a tiny square.

One of the approaches was to build new fonts. As part of their Core Fonts for the Web, Microsoft introduced two new typefaces, Georgia and Verdana, intended as replacements for Times New Roman and Arial, respectively, but optimized for viewing on the screen - caring more about how they parsed into the pixel grid than how they looked on the page. This is why the two of them are very easy to read, but are also tremendously boring to look at when printed out. Assuming you have all the right fonts installed, Georgia and Verdana should be a little clearer to read:
  • This (should be) in Arial
  • This (should be) in Verdana
  • This (should be) in Times New Roman
  • This (should be) in Georgia
Still, new fonts don't really solve the problem. What about desktop publishing, for example? We want our fancy, nice-looking fonts on the printed page, but we also want the screen to show us how they'll actually look on that page.

Then, when LCD screens came along, suddenly a cool new option presented itself: subpixel rendering. LCD screens don't work quite like the older CRT tubes - on an LCD, each pixel is actually three side-by-side segments, one each for red, green, and blue. Witty programmers suddenly realized - hey, we can effectively get a threefold increase in horizontal resolution for our anti-aliasing routines, as long as we don't care all that much about what color the pixels around the edge are. The good news is that at the tiny sizes we're talking about, color really doesn't matter - the human eye just isn't that good. So, if taking a black letter and adding a red-black edge to it makes it look smoother, the eye just blends the colors together but still sees the shape.
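To show the trick in miniature, here's a sketch in the same toy style as above - not ClearType's actual algorithm, just the core idea of sampling at triple the horizontal resolution and folding each run of three samples into one pixel's color stripes:

```python
# Subpixel rendering in miniature: sample the shape at three times the
# horizontal resolution, then fold each run of three samples into the
# red, green, and blue stripes of a single physical pixel. Real engines
# (ClearType included) also filter across neighboring stripes to tame
# color fringing; this sketch skips that step for clarity.

WIDTH, HEIGHT = 8, 8           # final pixels; we take WIDTH * 3 samples

def inside(x, y):
    # Same illustrative shape: black "ink" below the line y = x / 2.
    return y > x / 2

for py in range(HEIGHT):
    row = []
    for px in range(WIDTH):
        rgb = []
        for stripe in range(3):            # 0 = red, 1 = green, 2 = blue
            sx = px + (stripe + 0.5) / 3   # sample at the stripe's center
            # Ink switches that stripe off; background leaves it lit.
            rgb.append(0 if inside(sx, py + 0.5) else 255)
        row.append(tuple(rgb))
    print(row)                 # each triple is one pixel's (R, G, B)
```

Because each of the three stripes gets its own on/off decision, an edge can land on any third of a pixel - that's the extra horizontal resolution, paid for with those faint color fringes.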

As usual, this is the point where Microsoft and Apple started moving in totally different directions. Apple wrote their whole subpixel engine around making the fonts on the screen look as much like the designer wanted them to look as possible. Microsoft went the other way, and used their tech to jam the font characters into the pixel grid, fidelity to the designer be damned. Therefore, we end up with a world where Microsoft systems are easier to read on the screen but look different when printed out, and Apple systems look the same on the screen as on the page, at the cost of some readability on the screen. Joel "Joel On Software" Spolsky has an excellent article comparing the two approaches, complete with some Apple vs. MS graphics.

However, there's an extra wrinkle. Microsoft named their subpixel technology "ClearType" when they launched it in 2000 as part of the Microsoft Reader ebook platform, and it later made its way into the OS itself as of Windows XP. But while it's on by default in Vista, in XP it's fully present yet turned off out of the box. I just found this out myself - I had always assumed that ClearType was turned on and just didn't look very good on my XP machine. So, if you're running XP on an LCD screen, try this experiment:

If you're anything like me, you've got a Windows desktop full of icons. Crack open a Windows Explorer or My Computer window such that you have some text in that window and can still see the icon labels on the desktop. Right-click on the desktop and choose Properties. Go to the Appearance tab, then click the Effects button. In the resulting dialog box, there should be a pull-down menu under "Use the following method to smooth the edges of screen fonts". The selected option is probably "Standard". Change it to "ClearType". Hit OK until you're out of all the dialog boxes.
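If you'd rather flip the same switch in code (say, across a whole lab of XP machines), the setting is exposed through the documented Win32 SystemParametersInfo call. Here's a short Python/ctypes sketch - the constants come from the Windows SDK headers, but treat the whole thing as an unofficial convenience rather than anything Microsoft ships:

```python
# Enable ClearType programmatically on Windows via SystemParametersInfo.
# Run this on the XP machine itself (ctypes is in the standard library).
import ctypes

SPI_SETFONTSMOOTHING      = 0x004B
SPI_SETFONTSMOOTHINGTYPE  = 0x200B
FE_FONTSMOOTHINGCLEARTYPE = 2
SPIF_UPDATEINIFILE = 0x01      # persist the change in the registry
SPIF_SENDCHANGE    = 0x02      # broadcast it to running programs

user32 = ctypes.windll.user32
flags = SPIF_UPDATEINIFILE | SPIF_SENDCHANGE

# Turn font smoothing on, then select ClearType as the method.
user32.SystemParametersInfoW(SPI_SETFONTSMOOTHING, 1, None, flags)
user32.SystemParametersInfoW(SPI_SETFONTSMOOTHINGTYPE, 0,
                             FE_FONTSMOOTHINGCLEARTYPE, flags)
```

Under the hood this writes the same values the Effects dialog does, so the change survives a reboot.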

How much better does the screen look now? On my two machines the difference was pretty incredible. Turns out that feature has been in XP since day one, and I only just discovered it this week, eight years later. Dang.

But wait, there's more! Since ClearType really just exploits a quirk of the way LCD screens work to pull off an elaborate optical illusion, there are a whole lot of ways to make it look BAD. With that in mind, Microsoft Typography released the ClearType Tuner PowerToy, which adds a bunch of ways to dial your ClearType settings in to your specific monitor. Check it out. Personally, I'm sold.
