Tips, Tricks, and Techniques on using Cascading Style Sheets.

Offline *Only* Viewing

Thu, 02/08/2018 - 19:42

It made the rounds a while back that Chris Bolin built a page of his personal website that can only be viewed while you are offline.

This page itself is an experiment in that vein: What if certain content required us to disconnect? What if readers had access to that glorious focus that makes devouring a novel for hours at a time so satisfying? What if creators could pair that with the power of modern devices? Our phones and laptops are amazing platforms for inventive content—if only we could harness our own attention.

Now Bolin has a whole magazine around this same concept called The Disconnect!

The Disconnect is an offline-only, digital magazine of commentary, fiction, and poetry. Each issue forces you to disconnect from the internet, giving you a break from constant distractions and relentless advertisements.

I believe it's some Service Worker trickery to serve different files depending on the state of the network. Usually, Service Workers serve cached files when the network is off or slow, so that the website continues to work. This flips that logic on its head, preventing files from being served until the network is off.
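Bolin hasn't published his exact implementation here, but the inverted logic can be sketched roughly like this (the function and all names below are my own, not his code):

```javascript
// A hypothetical sketch of the offline-only trick, not Bolin's actual code.
// The usual Service Worker pattern (fall back to the cache when the network
// fails) is inverted: the cached article is served only when fetch() rejects.
async function offlineOnlyContent(networkFetch, cachedArticle) {
  try {
    await networkFetch(); // network reachable: withhold the article
    return 'Disconnect from the internet to read this page.';
  } catch (err) {
    return cachedArticle; // network down: serve the article from cache
  }
}

// Inside a real Service Worker this decision would be wired up roughly as:
// self.addEventListener('fetch', (event) => {
//   event.respondWith(/* offline-only logic as above */);
// });
```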

Offline *Only* Viewing is a post from CSS-Tricks

Wufoo Forms Integrate With Everything

Thu, 02/08/2018 - 14:29

(This is a sponsored post.)

Wufoo helps you build forms you can put on any website. There are a million reasons you might need to do that, from the humble contact form, to a sales lead generation form, to a sales or registration form.

That's powerful and useful all by itself. But Wufoo is even more powerful when you consider that it integrates with over 1,000 other web services.



Using Default Parameters in ES6

Thu, 02/08/2018 - 14:13

I’ve recently begun doing more research into what’s new in JavaScript, catching up on a lot of the new features and syntax improvements that have been included in ES6 (i.e. ES2015 and later).

You’ve likely heard about and started using the usual stuff: arrow functions, let and const, rest and spread operators, and so on. One feature, however, that caught my attention is the use of default parameters in functions, which is now an official ES6+ feature. This is the ability to have your functions initialize parameters with default values even if the function call doesn’t include them.

The feature itself is pretty straightforward in its simplest form, but there are quite a few subtleties and gotchas that you’ll want to note, which I’ll try to make clear in this post with some code examples and demos.

Default Parameters in ES5 and Earlier

A function that automatically provides default values for omitted parameters can be a beneficial safeguard for your programs, and this is nothing new.

Prior to ES6, you may have seen or used a pattern like this one:

function getInfo (name, year, color) {
  year = (typeof year !== 'undefined') ? year : 2018;
  color = (typeof color !== 'undefined') ? color : 'Blue';
  // remainder of the function...
}

In this instance, the getInfo() function has only one mandatory parameter: name. The year and color parameters are optional, so if they’re not provided as arguments when getInfo() is called, they’ll be assigned default values:

getInfo('Chevy', 1957, 'Green');
getInfo('Benz', 1965); // default for color is "Blue"
getInfo('Honda'); // defaults are 2018 and "Blue"

Try it on CodePen

Without this kind of check and safeguard in place, any uninitialized parameters would default to a value of undefined, which is usually not desired.
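To make that concrete, here's a trivial example of my own with no safeguard at all:

```javascript
// An omitted argument arrives as undefined and gets coerced into the output:
function greet(name) {
  return 'Hello, ' + name + '!';
}

greet('Ada'); // "Hello, Ada!"
greet();      // "Hello, undefined!"
```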

You could also use a truthy/falsy pattern to check for parameters that don’t have values:

function getInfo (name, year, color) {
  year = year || 2018;
  color = color || 'Blue';
  // remainder of the function...
}

But this may cause problems in some cases. In the above example, if you pass in a value of “0” for the year, the default 2018 will override it because 0 evaluates as falsy. In this specific example, it’s unlikely you’d be concerned about that, but there are many cases where your app might want to accept a value of 0 as a valid number rather than a falsy value.

Try it on CodePen

Of course, even with the typeof pattern, you may have to do further checks to have a truly bulletproof solution. For example, you might expect an optional callback function as a parameter. In that case, checking against undefined alone wouldn’t suffice. You’d also have to check if the passed-in value is a valid function.
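A pre-ES6 guard for an optional callback might look something like this (a sketch with names of my own choosing, not code from the article):

```javascript
function fetchData(url, callback) {
  // typeof catches both a missing argument and a non-function value,
  // so the call below is always safe
  callback = (typeof callback === 'function') ? callback : function () {};
  // ...request `url` here, then hand the result to the callback:
  callback('response for ' + url);
}
```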

So that’s a bit of a summary covering how we handled default parameters prior to ES6. Let’s look at a much better way.

Default Parameters in ES6

If your app requires that you use pre-ES6 features for legacy reasons or because of browser support, then you might have to do something similar to what I’ve described above. But ES6 has made this much easier. Here’s how to define default parameter values in ES6 and beyond:

function getInfo (name, year = 2018, color = 'blue') {
  // function body here...
}

Try it on CodePen

It’s that simple.

If year and color values are passed into the function call, the values passed in as arguments will supersede the ones defined as parameters in the function definition. This works exactly the same way as with the ES5 patterns, but without all that extra code. Much easier to maintain, and much easier to read.

This feature can be used for any of the parameters in the function head, so you could set a default for the first parameter along with two other expected values that don’t have defaults:

function getInfo (name = 'Pat', year, color) {
  // function body here...
}

Dealing With Omitted Values

Note that—in a case like the one above—if you wanted to omit the optional name argument (thus using the default) while including a year and color, you’d have to pass in undefined as a placeholder for the first argument:

getInfo(undefined, 1995, 'Orange');

If you don’t do this, then logically the first value will always be assumed to be name.

The same would apply if you wanted to omit the year argument (the second one) while including the other two (assuming, of course, the second parameter is optional):

getInfo('Charlie', undefined, 'Pink');

I should also note that the following may produce unexpected results:

function getInfo (name, year = 1965, color = 'blue') {
  console.log(year); // null
}

getInfo('Frankie', null, 'Purple');

Try it on CodePen

In this case, I’ve passed in the second argument as null, which might lead some to believe the year value inside the function should be 1965, which is the default. But this doesn’t happen, because null is considered a valid value. And this makes sense because, according to the spec, null is viewed by the JavaScript engine as the intentional absence of an object’s value, whereas undefined is viewed as something that happens incidentally (e.g. when a function doesn’t have a return value it returns undefined).

So make sure to use undefined and not null when you want the default value to be used. Of course, there might be cases where you want to use null and then deal with the null value within the function body, but you should be familiar with this distinction.
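The distinction is easy to verify with a stripped-down example of my own:

```javascript
function pickColor(color = 'blue') {
  return color;
}

pickColor();          // "blue" -- no argument, so the default applies
pickColor(undefined); // "blue" -- undefined also triggers the default
pickColor(null);      // null   -- null is a real value, so no default
```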

Default Parameter Values and the arguments Object

Another point worth mentioning here is in relation to the arguments object. The arguments object is an array-like object, accessible inside a function’s body, that represents the arguments passed to a function.

In non-strict mode, the arguments object reflects any changes made to the argument values inside the function body. For example:

function getInfo (name, year, color) {
  console.log(arguments);
  /* [object Arguments] {
    0: "Frankie",
    1: 1987,
    2: "Red"
  } */

  name = 'Jimmie';
  year = 1995;
  color = 'Orange';

  console.log(arguments);
  /* [object Arguments] {
    0: "Jimmie",
    1: 1995,
    2: "Orange"
  } */
}

getInfo('Frankie', 1987, 'Red');

Try it on CodePen

Notice in the above example, if I change the values of the function’s parameters, those changes are reflected in the arguments object. This feature was viewed as more problematic than beneficial, so in strict mode the behavior is different:

function getInfo (name, year, color) {
  'use strict';

  name = 'Jimmie';
  year = 1995;
  color = 'Orange';

  console.log(arguments);
  /* [object Arguments] {
    0: "Frankie",
    1: 1987,
    2: "Red"
  } */
}

getInfo('Frankie', 1987, 'Red');

Try it on CodePen

As shown in the demo, in strict mode the arguments object retains its original values for the parameters.

That brings us to the use of default parameters. How does the arguments object behave when the default parameters feature is used? Take a look at the following code:

function getInfo (name, year = 1992, color = 'Blue') {
  console.log(arguments.length); // 1
  console.log(year, color); // 1992 "Blue"

  year = 1995;
  color = 'Orange';

  console.log(arguments.length); // Still 1
  console.log(arguments);
  /* [object Arguments] {
    0: "Frankie"
  } */
  console.log(year, color); // 1995 "Orange"
}

getInfo('Frankie');

Try it on CodePen

There are a few things to note in this example.

First, the inclusion of default parameters doesn’t change the arguments object. So, as in this case, if I pass only one argument in the function call, the arguments object will hold a single item—even with the default parameters present for the optional arguments.

Second, when default parameters are present, the arguments object will always behave the same way in strict mode and non-strict mode. The above example is in non-strict mode, which usually allows the arguments object to be modified. But this doesn’t happen. As you can see, the length of arguments remains the same after modifying the values. Also, when the object itself is logged, the name value is the only one present.

Expressions as Default Parameters

The default parameters feature is not limited to static values but can include an expression to be evaluated to determine the default value. Here’s an example to demonstrate a few things that are possible:

function getAmount() {
  return 100;
}

function getInfo (name, amount = getAmount(), color = name) {
  console.log(name, amount, color);
}

getInfo('Scarlet'); // "Scarlet" 100 "Scarlet"
getInfo('Scarlet', 200); // "Scarlet" 200 "Scarlet"
getInfo('Scarlet', 200, 'Pink'); // "Scarlet" 200 "Pink"

Try it on CodePen

There are a few things to take note of in the code above. First, I’m allowing the second parameter, when it’s not included in the function call, to be evaluated by means of the getAmount() function. This function will be called only if a second argument is not passed in. This is evident in the second getInfo() call and the subsequent log.

The next key point is that I can use a previous parameter as the default for another parameter. I’m not entirely sure how useful this would be, but it’s good to know it’s possible. As you can see in the above code, the getInfo() function sets the third parameter (color) to equal the first parameter’s value (name), if the third parameter is not included.

And of course, since it’s possible to use functions to determine default parameters, you can also pass an existing parameter into a function used as a later parameter, as in the following example:

function getFullPrice(price) {
  return (price * 1.13);
}

function getValue (price, pricePlusTax = getFullPrice(price)) {
  console.log(price.toFixed(2), pricePlusTax.toFixed(2));
}

getValue(25); // "25.00" "28.25"
getValue(25, 30); // "25.00" "30.00"

Try it on CodePen

In the above example, I’m doing a rudimentary tax calculation in the getFullPrice() function. When this function is called, it uses the existing price parameter as part of the pricePlusTax evaluation. As mentioned earlier, the getFullPrice() function is not called if a second argument is passed into getValue() (as demonstrated in the second getValue() call).

Two things to keep in mind with regard to the above. First, the function call in the default parameter expression needs to include the parentheses, otherwise you’ll receive a function reference rather than an evaluation of the function call.

Second, you can only reference previous parameters with default parameters. In other words, you can’t reference the second parameter as an argument in a function to determine the default of the first parameter:

// this won't work
function getValue (pricePlusTax = getFullPrice(price), price) {
  console.log(price.toFixed(2), pricePlusTax.toFixed(2));
}

getValue(25); // throws an error

Try it on CodePen

Similarly, as you would expect, you can’t access a variable defined inside the function body from a function parameter.
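For instance, this variation (my own example) fails because the parameter list is evaluated in its own scope, before any variables in the function body exist:

```javascript
function getPrice(price = basePrice) {
  var basePrice = 50; // declared in the body, invisible to the parameter list
  return price;
}

getPrice(25); // 25 -- the default expression is never evaluated
// getPrice(); // throws a ReferenceError: basePrice is not defined
```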


That should cover just about everything you’ll need to know to get the most out of using default parameters in your functions in ES6 and above. The feature itself is quite easy to use in its simplest form but, as I’ve discussed here, there are quite a few details worth understanding.

If you’d like to read more on this topic, here are some sources:


Fallbacks for Videos-as-Images

Wed, 02/07/2018 - 21:25

Safari 11.1 shipped a strange-but-very-useful feature: the ability to use a video source in the <img> tag. The idea is it does the same job as a GIF (silent, autoplaying, repeating), but with big performance gains. How big? "20x faster and decode 7x faster than the GIF equivalent," says Colin Bendell.

Not all browsers support this so, to do a fallback, the <picture> element is ready. Bruce Lawson shows how easy it can be:

<picture>
  <source type="video/mp4" srcset="adorable-cat.mp4">
  <!-- perhaps even an animated WebP fallback here as well -->
  <img src="adorable-cat.gif" alt="adorable cat tears throat out of owner and eats his eyeballs">
</picture>

Šime Vidas notes you get wider browser support by using the <video> tag:

<video src="https://media.giphy.com/media/klIaoXlnH9TMY/giphy.mp4" muted autoplay loop playsinline></video>

But as Bendell noted, the performance benefits aren't there with video, notably the fact that video isn't helped out by the preloader. Sadly, <video> it is for now, as:

there is this nasty WebKit bug in Safari that causes the preloader to download the first <source> regardless of the mimetype declaration. The main DOM loader realizes the error and selects the correct one. However, the damage will be done. The preloader squanders its opportunity to download the image early and on top of that, downloads the wrong version wasting bytes. The good news is that I’ve patched this bug and it should land in Safari TP 45.

In short, using <picture> and <source type> for MIME-type selection is not advisable until the next version of Safari reaches 90%+ of the user base.

Still, eventually, it'll be quite useful.


A Short History of WaSP and Why Web Standards Matter

Wed, 02/07/2018 - 14:10

In August of 2013, Aaron Gustafson posted to the WaSP blog. He had a bittersweet message for a community that he had helped lead:

Thanks to the hard work of countless WaSP members and supporters (like you), Tim Berners-Lee’s vision of the web as an open, accessible, and universal community is largely the reality. While there is still work to be done, the sting of the WaSP is no longer necessary. And so it is time for us to close down The Web Standards Project.

If there’s just the slightest hint of wistful regret in Gustafson’s message, it’s because the Web Standards Project changed everything that had become the norm on the web during its 15+ years of service. Through dedication and developer advocacy, they hoisted the web up from a nest of browser incompatibility and meaningless markup to the standardized and feature-rich application platform most of us know today.

I previously covered what it took to bring CSS to the World Wide Web. This is the other side of that story. It was only through the efforts of many volunteers working tirelessly behind the scenes that CSS ever had a chance to become what it is today. They are the reason we have web standards at all.

Introducing Web Standards

Web standards weren't even a thing in 1998. There were HTML and CSS specifications and drafts of recommendations that were managed by the W3C, but they had spotty and uneven browser support which made them little more than words on a page. At the time, web designers stood at the precipice of what would soon be known as the Browser Wars, where Netscape and Microsoft raced to implement exclusive features and add-ons in an escalating fight for market share. Rather than stick to any official specification, these browsers forced designers to support either Netscape Navigator or Internet Explorer. And designers were definitely not happy about it.

Supporting both browsers and their competing feature implementations was possible, but it was also difficult and unreliable, like building a house on sand. To help each other along, many developers began joining mailing lists to swap tips and hacks for dealing with sites that needed to look good no matter where they were rendered.

From these mailing lists, a group began to form around an entirely new idea. The problem, this new group realized, wasn’t with the code, but with the browsers that refused to adhere to the codified, open specifications passed down by the W3C. Browsers touted new presentational HTML elements like the <blink> tag, but they were proprietary and provided no layout options. What the web needed was browsers that could follow the standards of the web.

The group decided they needed to step up and push browsers in the right direction. They called themselves the Web Standards Project. And, since the process would require a bit of a sting, they went by WaSP for short.

Launching the Web Standards Project

In August of 1998, WaSP announced their mission to the public on a brand new website: to "support these core standards and encourage browser makers to do the same, thereby ensuring simple, affordable access to Web technologies for all." Within a few hours, 450 people joined WaSP. In a few months, that number would jump to thousands.

WaSP took what was basically a two-pronged approach. The first was in public, tapping into the groundswell of developer support they had gathered to lobby for better standards support in browsers. Using grassroots tactics and targeted outreach, WaSP would often send its members on "missions," such as emailing browser makers to explain in great detail their troubles working with a lack of consistent web standards support.

They also published scathing reports that put browsers on blast, highlighting all the ways that Netscape or Internet Explorer failed to add necessary support, even going so far as to encourage users to use alternative browsers. It was in these reports that the project truly lived up to its acronym. One needs to look no further than a quote from WaSP’s savage takedown of Internet Explorer as an example of its ability to sting:

Quit before the job's done, and the flamethrower's the only answer. Because that's our job. We speak for thousands of Web developers, and through them, millions of Web users.

The second prong of WaSP's approach included privately reaching out to passionate developers on browser teams. The problem, for big companies like Netscape and Microsoft, wasn’t that engineers were against web standards. Quite the contrary, actually. Many browser engineers believed deeply in WaSP’s mission but were rebuffed by perceived business interests and red-tape bureaucracy time and time again. As a result, WaSP would often work with browser developers to find the best path forward and advocate on their behalf to the higher-ups when necessary.

Holding it All Together

To help WaSP navigate its way through its missions, reports, and outreach, a Steering Committee was formed. This committee helped set the project's goals and reached out to the community to gather support. They were the heralds of a better day soon to come, and more than a few influential members would pass through their ranks before the project was over, including: Rachel Cox, Tim Bray, Steve Champeon, Glenn Davis, Glenda Sims, Todd Fahrner, Molly Holzschlag and Aaron Gustafson, among many, many others.

At the top of it all was a project lead who set the tone for the group and gave developers a unified voice. The position was initially held by George Olsen, one of the founders of the project, but was soon picked up by another founding member: Jeffrey Zeldman.

A network of loosely connected satellite groups orbiting around the Steering Committee helped developers and browsers alike understand the importance of web standards. There was, for instance, an Accessibility group that bridged the W3C with browser makers to ensure the web was open and accessible to everyone. Then there was the CSS Samurai, who published reports about CSS support (or, more commonly, lack thereof) in different browsers. They were the ones that devised the Box Acid test and offered guidance to browsers as they worked to expand CSS support. Todd Fahrner, who helped save CSS with doctype switching, counted himself among the CSS Samurai.

Making an Impact

WaSP was huge and growing all the time. Its members were passionate and, little by little, clusters of the community came together to enact change. And that is exactly what happened.

The changes felt kind of small at first but soon they bordered on massive. When Netscape was kicking around the idea of a new rendering engine named Gecko that would include much better standards support across the board, the initial timeline put its release months away. But the WaSP swarmed, emailing and reaching out to Netscape to put pressure on them to release Gecko sooner. It worked and, by the next release, Gecko (and better web standards support) shipped.

Tantek Çelik was another member of WaSP. The community inspired him to take a stand on web standards at his day job as lead developer of Internet Explorer for Mac. It was through the encouragement and support of WaSP that he and his team released version 5 with full CSS Level 1 support.

Internet Explorer 5 for Mac was released with full CSS Level 1 support

In August of 2001, after years of public reports and private outreach and developer advocacy, the WaSP sting provoked seismic change in Internet Explorer as version 6 released with CSS Level 1 support and the latest HTML features. The upgrades were due in no small part to the work at the Web Standards Project and their work with dedicated members of the browser team. It appeared that standards were beginning to actually win out. The WaSP’s mission may have even been over.

But instead of calling it quits, they shifted tactics a bit.

Teaching Standards to a New Generation

In the early 2000s, WaSP radically changed its approach, turning toward education and developer outreach.

They started with the launch of the Browser Upgrade Campaign, which educated users who were coming online for the very first time and knew absolutely nothing about web standards and modern browsers. Site owners were encouraged to add some JavaScript and a banner to their sites to target these users. As a result, those surfing to a site on an older version of a standards-compliant browser, like Firefox or Opera, were greeted by a banner simply directing them to upgrade. Users visiting the site on a really old browser, like pre-IE5 or Netscape 4, would be redirected to an entirely new page explaining why upgrading to a modern browser with standards support was in their best interest.

A page from the Browser Upgrade Campaign

WaSP was going to bring the web up to speed, even if they had to do it one person at a time. Perhaps no one articulated this sentiment better than Molly Holzschlag when she wrote "Raise Your Standards" in February 2002. In the article, she broke down what web standards are and what they meant for developers and designers. She celebrated the work that had been done by browsers and the community working to make web standards a thing in the first place.

But, she argued, the web was far from done. It was now time for developers to step up to the plate and assume the responsibility for standards themselves by coding it into all of their sites. She wrote:

The Consortium is fraught with its own internal issues, and its actions—while almost always in the best interests of professional Web authors—are occasionally politicized.

Therefore, as Web authors, we're personally responsible for making implementation decisions within the framework of a site's markup needs. It's our job to administer recommendations to the best of our abilities.

This, however, would not be easy. It would once again require the combined efforts of WaSP members to pull together and teach the web a new way to code. Some began publishing tutorials to their personal blogs or on A List Apart. Others created a standards-based online curriculum for web developers who were new to the field. A few members even formed brand-new task forces to work with popular software tools, like Adobe Dreamweaver, and ensure that standards were supported there as well.

The redesigns of ESPN and Wired, which stood as a testament and example for standards-based designs for years to come, were undertaken in part because members of those teams were inspired by the work that WaSP was doing. They would not have been able to take those crucial first steps if not for the examples and tutorials made freely available to them by gracious WaSP members.

That is why web standards are basically second nature to many web developers today. It’s also why we have such a free spirit of creative exchange in our industry. It all started when WaSP decided to share the correct way of doing things right out in the open.

Looking Past Web Standards

It was this openness that carried WaSP into the late 2000s. When Holzschlag took over as lead, she advocated for transparency and collaboration between browser makers and the web community. The work of the WaSP, Holzschlag realized, no longer needed to come from the outside; it could be done from within. For example, she made inroads at Microsoft to help make web standards a top priority on their browser team.

With each subsequent release, browsers began to catch up to the latest standards from the W3C. Browsers like Opera and Firefox actually competed on supporting the latest standards. Google Chrome used web standards as a selling point when it was initially released around the same time. The decade-and-a-half of work by WaSP was paying off. Browser makers were listening to the W3C and the web community, even going so far as to experiment with new standards before they were officially published for recommendation.

In 2013, WaSP posted its farewell announcement and closed up shop for good. It was a difficult decision for those who had fought long and hard for a better, more accessible and more open web, but it was necessary. There are still a number of battlegrounds for the open web but, thanks to the efforts of WaSP, the one for web standards has been won.

Enjoy learning about web history? Jay Hoffmann has a weekly newsletter called The History of the Web you can sign up for here.


Counting With CSS Counters and CSS Grid

Tue, 02/06/2018 - 14:21

You’ve heard of CSS Grid, I’m sure of that. It would be hard to miss it considering that the whole front-end developer universe has been raving about it for the past year.

Whether you’re new to Grid or have already spent some time with it, we should start this post with a short definition directly from the words of W3C:

Grid Layout is a new layout model for CSS that has powerful abilities to control the sizing and positioning of boxes and their contents. Unlike Flexible Box Layout, which is single-axis–oriented, Grid Layout is optimized for 2-dimensional layouts: those in which alignment of content is desired in both dimensions.

In my own words, CSS Grid is a mesh of invisible horizontal and vertical lines. We arrange elements in the spaces between those lines to create a desired layout. It's an easier, more stable, and standardized way to structure content on a web page.

Besides the graph paper foundation, CSS Grid also provides the advantage of a layout model that’s source order independent: irrespective of where a grid item is placed in the source code, it can be positioned anywhere in the grid across both axes on screen. This is very important, not only for when you’d find it troublesome to update HTML while rearranging elements on the page, but also for times when certain source placements are restrictive to layouts.

Although we can always move an element to a desired coordinate on screen using other techniques like translate, position, or margin, they’re both harder to code and to update for situations like building a responsive design, compared to true layout mechanisms like CSS Grid.

In this post, we’re going to demonstrate how we can use the source order independence of CSS Grid to solve a layout issue that’s the result of a source order constraint. Specifically, we're going to look at checkboxes and CSS Counters.

Counting With Checkboxes

If you’ve never used CSS Counters, don’t worry, the concept is pretty simple! We set a counter to count a set of elements at the same DOM level. That counter is incremented in the CSS rules of those individual elements, essentially counting them.

Here’s the code to count checked and unchecked checkboxes:

<input type="checkbox">Checkbox #1<br>
<input type="checkbox">Checkbox #2
<!-- more checkboxes, if we want them -->

<div class="total">
  <span class="totalChecked"> Total Checked: </span><br>
  <span class="totalUnChecked"> Total Unchecked: </span>
</div>

:root {
  counter-reset: checked-sum unchecked-sum;
}

input[type="checkbox"] {
  counter-increment: unchecked-sum;
}

input[type="checkbox"]:checked {
  counter-increment: checked-sum;
}

.totalUnChecked::after {
  content: counter(unchecked-sum);
}

.totalChecked::after {
  content: counter(checked-sum);
}

In the above code, two counters are set at the root element using the counter-reset property and are incremented at their respective rules, one for checked and the other for unchecked checkboxes, using counter-increment. The values of the counters are then shown as the content of the two <span>s' ::after pseudo-elements using counter().

Here's a stripped-down version of what we get with this code:

See the Pen Checkbox + Label Grid by Preethi (@rpsthecoder) on CodePen.

This is pretty cool. We can use it in to-do lists, email inbox interfaces, survey forms, or anywhere where users toggle boxes and will appreciate being shown how many items are checked and how many are unselected. All this with just CSS! Useful, isn’t it?

But the effectiveness of counter() wanes when we realize that an element displaying the total count can only appear after all the elements to be counted in the source code. This is because the browser first needs the chance to count all the elements, before showing the total. Hence, we can’t simply change the markup to place the counters above the checkboxes like this:

<!-- This will not work! -->
<div class="total">
  <span class="totalChecked"> Total Checked: </span><br>
  <span class="totalUnChecked"> Total Unchecked: </span>
</div>

<input type="checkbox">Checkbox #1<br>
<input type="checkbox">Checkbox #2

Then, how else can we get the counters above the checkboxes in our layout? This is where CSS Grid and its layout-rendering powers come into play.

Adding Grid

We're basically wrapping the previous HTML in a new <div> element that’ll serve as the grid container:

<div class="grid">
  <input type="checkbox">Checkbox #1
  <input type="checkbox">Checkbox #2
  <input type="checkbox">Checkbox #3
  <input type="checkbox">Checkbox #4
  <input type="checkbox">Checkbox #5
  <input type="checkbox">Checkbox #6
  <div class="total">
    <span class="totalChecked"> Total Checked: </span>
    <span class="totalUnChecked"> Total Unchecked: </span>
  </div>
</div>

And, here is the CSS for our grid:

.grid {
  display: grid; /* creates the grid */
  grid-template-columns: repeat(2, max-content); /* creates two columns on the grid that are sized based on the content they contain */
}

.total {
  grid-row: 1; /* places the counters on the first row */
  grid-column: 1 / 3; /* ensures the counters span the full grid width, forcing other content below */
}

This is what we get as a result (with some additional styling):

See the Pen CSS Counter Grid by Preethi (@rpsthecoder) on CodePen.

See that? The counters are now located above the checkboxes!

We defined two columns on the grid element in the CSS, each accommodating its own content to their maximum size.

When we grid-ify an element, its contents (text included) block-ify, meaning they acquire a grid-level box (similar to a block-level box) and are automatically placed in the available grid cells.
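To see this blockification in isolation, consider a stripped-down container (markup assumed purely for illustration):

```html
<div class="grid">
  <!-- with .grid set to display: grid and two columns -->
  <input type="checkbox">Checkbox #1
</div>
```

The <input> becomes one grid item, and the bare text "Checkbox #1" gets wrapped in an anonymous grid-level box that becomes a second item, which is why each lands in its own column.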

In the demo above, the counters take up both the grid cells in the first row as specified, and following that, every checkbox resides in the first column and the text after each checkbox stays in the last column.

The checkboxes are forced below the counters without changing the actual source order!

Since we didn’t change the source order, the counter works and we can see the running total count of checked and unchecked checkboxes at the top the same way we did when they were at the bottom. The functionality is left unaffected!

To be honest, there’s a staggering number of ways to code and implement a CSS Grid. You can use grid line numbers, named grid areas, among many other methods. The more you know about them, the easier it gets and the more useful they become. What we covered here is just the tip of the iceberg and you may find other approaches to create a grid that work equally well (or better).

Counting With CSS Counters and CSS Grid is a post from CSS-Tricks

Web-Powered Augmented Reality: a Hands-On Tutorial

Mon, 02/05/2018 - 21:58

Uri Shaked has written about his journey in AR on the web from the very early days of Google’s Project Tango to the recent A-Frame experiments from Mozilla. Front-end devs might be interested in A-Frame because of how you work with it - it's a declarative language like HTML! I particularly like this section where Uri describes how it felt to first play around with AR:

The ability to place virtual objects in the real space, and have them stick in place even when you move around, seemed to me like we were diving down the uncanny valley, where the boundaries between the physical world and the game were beginning to blur. This was the first time I experienced AR without the need for markers or special props — it just worked out of the box, everywhere.

Direct Link to ArticlePermalink

Web-Powered Augmented Reality: a Hands-On Tutorial is a post from CSS-Tricks

The Best UX is No User Interface at All

Mon, 02/05/2018 - 14:20

I have been obsessed with User Interfaces (UI) for as long as I can remember. I remember marveling at the beauty that was Compaq TabWorks while I played "The Incredible Machine" and listened to "Tears For Fears—Greatest Hits" on the family computer.

Don’t judge me—I was listening to "Mad World" way before Donnie Darko and that creepy rabbit. If none of those references landed with you, it’s probably because I’m super old. In the words of George Costanza, "It’s not you, it’s me."

That’s another super old reference you might not get. You know what—forget all that, let’s move on.

I really got into UI when I bought my own computer. I had joined the Coast Guard and saved a bunch of money during boot camp (when you can’t go shopping—you know—because of push-ups and stuff). I wanted to buy a Chevy Cavalier (sadly, that’s not a joke), but my father encouraged me to invest in a computer instead, so I bought a Compaq from Office Depot that came with Windows 98. Also you can’t buy a Cavalier with 800 bucks.

Windows 98

I spent countless hours changing the themes in Windows 98. I was mesmerized by the way windows overlapped and how the icons and fonts would change; the shapes of buttons and the different colors. The slight drop shadow each window had to layer it in space. Each theme was better than the previous theme!

Oh, the depth of the blues! The glory of fish! BREATHTAKING.

If only I had known how much better things were going to get. If only I had known, about Windows XP.

Windows XP

Does love at first sight exist? No—don’t be ridiculous. Love is an extremely complex part of the human condition that can only manifest itself over time through long periods of struggling and the dark night of the soul.

"What is love? Baby don’t hurt me. Don’t hurt me. No more."

—Haddaway, "What Is Love"

But love’s fickle and cruel cousin, Infatuation, does exist and it is almost exclusively available at first sight. I was absolutely infatuated with Windows XP.

The curves on the start menu. The menu animations. I could just look at it for hours. And I did. Shocking fact—I wasn’t exactly in high social demand so I had a great deal of free time to do weird things like stare at an operating system.

For those who remember, Windows XP was extremely customizable. Virtually every part of the operating system could be skinned or themed. This spawned a lot of UI hacking communities and third party tools like Window Blinds from the fine folks at Stardock. I see you Stardock; the north remembers.

I Love UI

I could go on and on about my long, boring and slightly disturbing obsession with UI. Oddly enough, I am not a designer or an artist. I can build a decent UI, but you would not hire me to design your site. Or you would but your name would be "Burke’s Mom."

Awww. Thanks, Mom. I can do 3 images.

I can however assemble great UI if I have the building blocks. I’ve been lucky enough to work on some great UI projects in my career, including being part of the Kendo UI project when it first launched. I love buttons, dropdown lists, and dialogue windows with over the top animation. And I can assemble those parts into an application like Thomas Kinkade. I am the UI assembler of light.

But as a user, one thought has been recurring for me during the past few years: the best user experience is really no user interface at all.

UI is a Necessary Evil

The only reason that a UI even exists is so that users can interact with our systems. It’s a middle-man. It’s an abstracted layer of communication and the conversation is pre-canned. The user and the UI can communicate, but only within the specifically defined boundaries of the interface. And this is how we end up with GLORIOUS UX fails like the one that falsely notified Hawaiian residents this past weekend of an incoming ballistic missile.

This is the screen that set off the ballistic missile alert on Saturday. The operator clicked the PACOM (CDW) State Only link. The drill link is the one that was supposed to be clicked. #Hawaii pic.twitter.com/lDVnqUmyHa

— Honolulu Civil Beat (@CivilBeat) January 16, 2018

We have to anticipate how the user is going to think or react, and everyone is different. Well-designed systems can get us close to intuitive. I am still a fan of skeuomorphic design and "sorry not sorry." If a 4 year old can pick up and use an iPad with no instruction, that’s kind of a feat of UX genius.

That said, even a perfect UI would be less than ideal. The ideal is to have no middleman at all. No translation layer. Historically speaking, this hasn’t been possible because we can’t "speak" to computers.

Until now.

Natural-Language Processing

Natural-language processing (NLP) is the field of computing that deals with language interaction between humans and machines. The most recognizable example of this would be the Amazon Echo, Siri, Cortana or Google. Or "OK Google." Or whatever the heck you call that thing.

I firmly believe that being able to communicate with an AI via spoken language is a better user interaction than a button—every time. To make this case, I would like to give you three examples of how NLP can completely replace a UI and the result is a far better user experience.

Exhibit A: Hey Siri, Remind Me To...

Siri is not a shining example of "a better user experience," but one thing that it does fairly well, and the thing I use it for almost every day, is creating reminders.

It is a far better user experience to say "Hey Siri, remind me to email my mom tomorrow morning 9 AM" than it is to do this...

  1. Open the app
  2. Tap a new line
  3. Type out the reminder
  4. Tap the "i"
  5. Select the date
  6. Tap “Done”

No matter how beautiful the Reminders app is, it will never match the UX of just telling Siri to do it.

Now this comes with the disclaimer of, "when it works." Siri frequently just goes to lunch or cuts me off halfway through, which results in a nonsensical reminder with no due date. When NLP goes wrong, it tends to go WAY wrong. It’s also incredibly annoying, as anyone who has EVER used Siri can attest.

This is a simple example, and one that you might already be aware of or not that impressed with. Fair enough; here’s another: Home Automation.

Exhibit B: Home Automation

I have a bunch of the GE Z-Wave switches installed in my house. I tie them all together with a Vera Controller. If you aren’t big into home automation, just know that the switches connect to the controller and the controller exposes the interface with which to control them, allowing me to turn the lights on and off with my phone.

The Vera app for controlling lights is quite nice. It’s not perfect, but the UX is decent. For instance, if I wanted to turn on the office lights, this is how I would do it using the app.

I said it was "quite nice." Not perfect. I’m just saying I’ve seen worse.

To be honest though, when I want to turn a light on or off, I don’t want to go hunting and pecking through an app on my phone to do it. That is not awesome. I want the light on and I want it on now. Turning lights on and off via your phone is a step backward in usability when compared to, I don’t know, A LIGHT SWITCH?

What is awesome, is telling my Echo to do it.

I can, for any switch in my house, say...

“Alexa, turn on/off the office lights”

Or the bedroom, or the dining room or what have you. Vera has an Alexa skill that allows Alexa to communicate directly with the controller and because Alexa uses NLP, I don’t have to say the phrase exactly right to get it to work. It just works.

Now, there is a slight delay between the time that I finish issuing the command and the time that Alexa responds. I assume this is the latency to go out to the server, execute the skill, call back into my controller, turn off the light, go back out to the skill in the cloud and then back down into my house.

I’m going to be honest and say that I sometimes get irritated that it takes a second or two to turn the lights on. Sure—blah blah blah technical reasons, but I don’t care. I want the lights on and I want them on NOW. Like Veruca Salt.

I also have Nest thermostats which I can control with the Echo and I gotta tell you, being able to adjust your thermostat without even getting out of bed is kind of, well, it's kind of pathetic now that I’ve said it out loud. Never mind. I never ever do that.

NLP doesn’t have to be limited to the spoken word. It turns out that interfacing with computers via text is STILL better than buttons and sliders.

For that, I give you Exhibit C.

Exhibit C: Digit

Digit is a remarkable little service that I discovered via a Twitter ad. You’ve always wondered who clicks on Twitter ads and now you know.

I wish more people knew about Digit. The basic premise behind the service is that they save money for you automatically each month by running machine learning on your spending habits to figure out where they can save money without sending you into the red.

The most remarkable thing about Digit is that you don’t interface with it via an app. Everything is done via text; and I love it.

Digit texts me every day to give me an update on my bank account balance. This is a nice daily heads up look at my current balance.

Yes, I blurred out my balance. It’s so you don’t get depressed on my behalf.

If I want to know how much Digit has saved for me, I just ask how much is in my savings. But again, because Digit is using NLP, I can ask it however I like. I can even just use the word "savings" and it still works. It’s almost like I’m interfacing with a real person.

Now if I want to transfer some of that money back out of savings, because I want to buy more Lego and my wife says that Lego are a "want" not a "need" and that we should be saving for our kids' "college," I can just ask Digit to transfer some money. Again, I don’t have to know exactly what to say. I can interface with Digit until I get the right result. Even if I screw up mid-transaction, Digit can handle it. This is basically me filling out a form via text without the hell that is "filling out a form."

After using Digit via text for so long, I now want to interface with everything via text. Sometimes it’s even better than having to talk out loud, especially if you are in a situation where you can’t just yell something out to a robot, or you can’t be bothered to speak. I have days like that too.

Is UX as We Know it Dead?

No. Emphatically no. NLP is not a substitution for all user interfaces. For instance, I wouldn’t want to text my camera to tell it to take a picture. Or scroll through photos with my voice. It is, however, a new way to think about how we design our user interfaces now that we have this powerful new form of input available.

So, before you design that next form or shopping cart, ask yourself: Do I really even need this UI? There’s a good chance that thanks to NLP and AI/ML, you don’t.

How to Get Started With NLP

NLP is far easier to create and develop than you might think. We’ve come a long way in terms of developer tooling. You can check out the LUIS project from Azure which provides a GUI tool for building and training NLP models.

It’s free and seriously easy.

Here’s a video of me building an AI that can understand when I ask it to turn lights on or off by picking the light state and room location out of an interaction.

The Best UX is No User Interface at All is a post from CSS-Tricks

Website Sameness™

Sun, 02/04/2018 - 14:59

Here's captain obvious (yours truly) with an extra special observation for you:

— Chris Coyier (@chriscoyier) January 30, 2018

It came across as (particularly trite) commentary about Website Sameness™. I suppose it was. I was looking at lots of sites as I was putting together The Power of Serverless. I was actually finding it funny how obtuse the navigation often is on SaaS sites. Products? Solutions? Which one is for me? Do I need to buy a product and a solution? Sometimes they make me feel dumb, like I'm not informed enough to be a customer. What's the harm in just telling me exactly what your thing does?

But anyway, people commenting on Website Sameness™ has plenty of history onto itself. One of the most memorable stabs was from Jon Gold:

which one of the two possible websites are you currently designing? pic.twitter.com/ZD0uRGTqqm

— Jon Gold (@jongold) February 2, 2016

Dave Ellis has a good one too:

The style itself is now so mainstream that clients ask for it. It’s happened to me, more than once. I’ve created sites that follow the formula. This surely is another reason. If clients are seeing a lot of sites that are the same style, it’s causing them to ask for it.

Mary Collins says Dave's sentiment rang true right away:

Myself, I'm not sure how much I care. If a website fails to do what it sets out to do, that I care about. Design is failing there. But if a website has a design that is a bit boring, but does just what everyone needs it to do, that's just fine. All hail boring. Although I admit it's particularly ironic when a design agency's own site feels regurgitated.

My emotional state is likely more intrigued about your business model and envious of your success than eyerolly about your design.

As long as I'm playing armchair devil's advocate: if every website was a complete and total design departure from the next, I imagine that would be worse. Having to re-learn how each new site works means not taking advantage of affordances, which make people productive out of the gate with new experiences.


It's probably fair to say, though, that design uniqueness and affordances need not be at odds. Surely you can design a site that is aesthetically unique, yet people still know how to use the dropdown menus.

There have been a lot of scapegoats for Website Sameness™ over the years. The popularity of frameworks. Flat design as a trend. Performance holding back creativity. User expectations. Research telling us that our existing patterns work. The fact that websites are all largely trying to do the same things. Even responsive design is a popular whipping boy. We might as well throw style guides / pattern libraries on the heap.

So again, I'm not sure how much I care. Partially because of these two things:

  • Designers have all the tools they need to make websites as unique as they like.
  • There is an awful lot of money in websites, and an awful lot of people trying to get their hands on it.

If design uniqueness was a lever you could pull for increased success for any type of business, you'd better believe it would be pulled a lot more often.

Website Sameness™ is a post from CSS-Tricks

Sketching in the Browser

Sat, 02/03/2018 - 15:21

Mark Dalgleish details how his team at SEEK tried to build a library of React components that could then be translated into Sketch documents. Why is that important though? Well, Mark describes the problems that his team faced like this:

...most design systems still have a fundamental flaw. Designers and developers continue to work in entirely different mediums. As a result, without constant, manual effort to keep them in sync, our code and design assets are constantly drifting further and further apart.

For companies working with design systems, it seems our industry is stuck with design tools that are essentially built for the wrong medium—completely unable to feed our development work back into the next round of design.

Mark then describes how his team went ahead and open-sourced html-sketchapp-cli, a command line tool for converting HTML documents into Sketch components. The idea is that this will ultimately save everyone from having to effectively copy and paste styles from the React components back to Sketch and vice-versa.

Looks like this is the second major stab at React-to-Sketch conversion. The last one that went around was Airbnb's React Sketch.app. We normally think of the end result of design tooling being the code, so it's fascinating to see people finding newfound value in moving the other direction.

Direct Link to ArticlePermalink

Sketching in the Browser is a post from CSS-Tricks

Using Conic Gradients and CSS Variables to Create a Doughnut Chart Output for a Range Input

Fri, 02/02/2018 - 15:00

I recently came across this Pen and my first thought was that it could all be done with just three elements: a wrapper, a range input and an output. On the CSS side, this involves using a conic-gradient() with a stop set to a CSS variable.

The result we want to reproduce.

In mid 2015, Lea Verou unveiled a polyfill for conic-gradient() during a conference talk where she demoed how they can be used for creating pie charts. This polyfill is great for getting started to play with conic-gradient(), as it allows us to use them to build stuff that works across the board. Sadly, it doesn't work with CSS variables and CSS variables have become a key component of writing efficient code these days.

The good news is that things have moved a bit over the past two and a half years. Chrome and, in general, browsers using Blink that expose flags (like Opera for example) now support conic-gradient() natively (yay!), which means it has become possible to experiment with CSS variables as conic-gradient() stops. All we need to do is have the Experimental Web Platform Features flag enabled in chrome://flags (or, if you're using Opera, opera://flags):

The Experimental Web Platform Features flag enabled in Chrome.

Alright, now we can get started!

The Initial Structure

We start with a wrapper element and a range input:

<div class="wrap">
  <input id="r" type="range"/>
</div>

Note that we don't have an output element there. This is because we need JavaScript to update the value of the output element anyway and we don't want to see an ugly useless non-updating element if the JavaScript is disabled or fails for some reason. So we add this element via JavaScript and also, based on whether the current browser supports conic-gradient() or not, we add a class on the wrapper to signal that.

If our browser supports conic-gradient(), the wrapper gets a class of .full and we style the output into a chart. Otherwise, we just have a simple slider without a chart, the output being on the slider thumb.

The result in browsers supporting conic-gradient() (top) and the fallback in browsers not supporting it (bottom).

Basic Styles

Before anything else, we want to show a nice-looking slider on the screen in all browsers.

We start with the most basic reset possible and set a background on the body:

$bg: #3d3d4a;

* { margin: 0 }

body { background: $bg }

The second step is to prepare the slider for styling in WebKit browsers by setting -webkit-appearance: none on it and on its thumb (because the track already has it set by default for some reason) and we make sure we level the field by explicitly setting the properties that are inconsistent across browsers like padding, background or font:

[type='range'] {
  &, &::-webkit-slider-thumb { -webkit-appearance: none }

  display: block;
  padding: 0;
  background: transparent;
  font: inherit
}

If you need a refresher on how sliders and their components work in various browsers, check out my detailed article on understanding the range input.

We can now move on to the more interesting part. We decide upon the dimensions of the track and thumb and set these on the slider components via the corresponding mixins. We'll also include some background values so that we have something visible on the screen as well as a border-radius to prettify things. For both components, we also reset the border to none so that we have consistent results across the board.

$k: .1;
$track-w: 25em;
$track-h: .02*$track-w;
$thumb-d: $k*$track-w;

@mixin track() {
  border: none;
  width: $track-w;
  height: $track-h;
  border-radius: .5*$track-h;
  background: #343440
}

@mixin thumb() {
  border: none;
  width: $thumb-d;
  height: $thumb-d;
  border-radius: 50%;
  background: #e6323e
}

[type='range'] {
  /* same styles as before */
  width: $track-w;
  height: $thumb-d;

  &::-webkit-slider-runnable-track { @include track }
  &::-moz-range-track { @include track }
  &::-ms-track { @include track }

  &::-webkit-slider-thumb {
    margin-top: .5*($track-h - $thumb-d);
    @include thumb
  }
  &::-moz-range-thumb { @include thumb }
  &::-ms-thumb {
    margin-top: 0;
    @include thumb
  }
}

We add a few more touches like setting a margin, an explicit width and a font on the wrapper:

.wrap {
  margin: 2em auto;
  width: $track-w;
  font: 2vmin trebuchet ms, arial, sans-serif
}

We don't want to let this get too small or too big, so we limit the font-size:

.wrap {
  @media (max-width: 500px), (max-height: 500px) { font-size: 10px }
  @media (min-width: 1600px), (min-height: 1600px) { font-size: 32px }
}

And we now have a nice cross-browser slider:

See the Pen by thebabydino (@thebabydino) on CodePen.

The JavaScript

We start by getting the slider and the wrapper and creating the output element.

const _R = document.getElementById('r'),
      _W = _R.parentNode,
      _O = document.createElement('output');

We create a variable val where we store the current value of our range input:

let val = null;

Next, we have an update() function that checks whether the current slider value is equal to the one we have already stored. If that's not the case, we update the JavaScript val variable, the text content of the output and the CSS variable --val on the wrapper.

function update() {
  let newval = +_R.value;

  if(val !== newval)
    _W.style.setProperty('--val', _O.value = val = newval)
}

Before we move further with the JavaScript, we set a conic-gradient() on the output from the CSS:

output {
  background: conic-gradient(#e64c65 calc(var(--val)*1%), #41a8ab 0%)
}

We put things in motion by calling the update() function, adding the output to the DOM as a child of the wrapper and then testing whether the computed background-image of the output is the conic-gradient() we have set or not (note that we need to add it to the DOM before we do this).

If the computed background-image is not "none" (it computes to "none" when there's no native conic-gradient() support), then we add a full class on the wrapper. We also connect the output to the range input via a for attribute.

Via event listeners, we ensure the update() function is called every time we move the slider thumb.

_O.setAttribute('for', _R.id);

update();
_W.appendChild(_O);

if(getComputedStyle(_O).backgroundImage !== 'none')
  _W.classList.add('full');

_R.addEventListener('input', update, false);
_R.addEventListener('change', update, false);

We now have a slider and an output (that shows its value on a variable conic-gradient() background if we're viewing it in a browser with native conic-gradient() support). Still ugly at this stage, but it's functional—the output updates when we drag the slider:

See the Pen by thebabydino (@thebabydino) on CodePen.

We've also given the output a light color value so that we can see it better and added a % at the end via the ::after pseudo-element. We've also hidden the tooltip (::-ms-tooltip) in Edge by setting its display to none.
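Those finishing touches aren't shown above; a minimal sketch of what they could look like (the exact color is an assumption, the selectors follow the markup used so far):

```css
output {
  color: #fff; /* assumed light color so the value is readable on the dark background */
}

output::after {
  content: '%'; /* append the percent sign after the slider value */
}

[type='range']::-ms-tooltip {
  display: none; /* hide the native tooltip Edge shows above the thumb */
}
```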

The No Chart Case

This is the case when we don't have conic-gradient() support so we don't have a chart. The result we're aiming for can be seen below:

The result we want to reproduce.

Prettifying the Output

In this case, we absolutely position the output element, make it take the dimensions of the thumb and put its text right in the middle:

.wrap:not(.full) {
  position: relative;

  output {
    position: absolute;
    /* ensure it starts from the top */
    top: 0;
    /* set dimensions */
    width: $thumb-d;
    height: $thumb-d
  }
}

/* we'll be using this for the chart case too */
output {
  /* place text in the middle */
  display: flex;
  align-items: center;
  justify-content: center;
}

If you need a refresher on how align-items and justify-content work, check out this comprehensive article on CSS alignment by Patrick Brosset.

The result can be seen in the following Pen, where we've also set an outline in order to clearly see the boundaries of the output:

See the Pen by thebabydino (@thebabydino) on CodePen.

This is starting to look like something, but our output isn't moving with the slider thumb.

Making the Output Move

In order to fix this problem, let's first remember how the motion of a slider thumb works. In Chrome, the border-box of the thumb moves within the limits of the track's content-box, while in Firefox and Edge, the thumb's border-box moves within the limits of the actual slider's content-box.

While this inconsistency may cause problems in some situations, our use case here is a simple one. We don't have margins, paddings or borders on the slider or on its components, so the three boxes (content-box, padding-box and border-box) coincide with both the slider itself and its track and thumb components. Furthermore, the width of the three boxes of the actual input coincides with the width of the three boxes of its track.

This means that when the slider value is at its minimum (which we haven't set explicitly, so it's the default 0), the left edge of the thumb's boxes coincides with the left edge of the input (and with that of the track).

Also, when the slider value is at its maximum (again, not set explicitly, so it takes the default value 100), the right edge of the thumb's boxes coincides with the right edge of the input (and with that of the track). This puts the left edge of the thumb one thumb width ($thumb-d) before (to the left of) the right edge of the slider (and of the track).

The following illustration shows this relative to the input width ($track-w)—this is shown to be 1. The thumb width ($thumb-d) is shown as a fraction k of the input width (since we've set it as $thumb-d: $k*$track-w).

The slider thumb at the minimum value and at the maximum value (live).

From here, we get that the left edge of the thumb has moved by an input width ($track-w) minus a thumb width ($thumb-d) in between the minimum and the maximum.

In order to move the output the same way, we use a translation. In its initial position, our output is at the leftmost position of the thumb, the one occupied when the slider value is at its minimum, so the transform we use is translate(0). To move it into the position occupied by the thumb when the slider value is at its maximum, we need to translate it by $track-w - $thumb-d = $track-w*(1 - $k).

The range of motion for the slider thumb and, consequently, the output (live).
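This relationship can be sanity-checked with a little plain-JavaScript arithmetic (the pixel values here are assumptions for illustration, not taken from the demo):

```javascript
// How far the output has to travel, mirroring the relationship above:
// offset = (slider value / 100) * (track width - thumb width)
const trackW = 400; // assumed track width in px
const thumbD = 40;  // assumed thumb width in px (k = .1 of the track width)

function outputOffset(val) {
  // val is the slider value, from 0 (minimum) to 100 (maximum)
  return (val / 100) * (trackW - thumbD);
}

console.log(outputOffset(0));   // 0: the thumb's (and output's) leftmost position
console.log(outputOffset(100)); // 360: one track width minus one thumb width
```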

Alright, but what about the values in between?

Well, remember that every time the slider value gets updated, we're not only setting the new value to the output's text content, but also to a CSS variable --val on the wrapper. This CSS variable goes between 0 at the left end (when the slider value is its minimum, 0 in this case) and 100 at the other end (when the slider value is its maximum, 100 in this case).

So if we translate our output along the horizontal (x) axis by calc(var(--val)/100*#{$track-w - $thumb-d}), this moves it along with the thumb without us needing to do anything else:

See the Pen by thebabydino (@thebabydino) on CodePen.

Note how the above works if we click elsewhere on the track, but not if we try to drag the thumb. This is because the output now sits on top of the thumb and catches our clicks instead.

We fix this problem by setting pointer-events: none on the output.

See the Pen by thebabydino (@thebabydino) on CodePen.

In the demo above, we have also removed the ugly outline on the output element as we don't need it anymore.

Now that we have a nice fallback for browsers that don't support conic-gradient() natively, we can move on to building the result we want for those that do (Chrome/ Opera with flag enabled).

The Chart Case

Drawing the Desired Layout

Before we start writing any code, we need to clearly know what we're trying to achieve. In order to do that, we do a layout sketch with dimensions relative to the track width ($track-w), which is also the width of the input and the edge of the wrapper's content-box (wrapper padding not included).

This means the content-box of our wrapper is a square of edge 1 (relative to the track width), the input is a rectangle having one edge along and equal to an edge of the wrapper and the other one a fraction k of the same edge, while its thumb is a k×k square.

The desired layout in the chart case (live).

The chart is a square of edge 1 - 2·k, touching the wrapper edge opposite to the slider, a k gap away from the slider and in the middle along the other direction. Given that the edge of the wrapper is 1 and that of the chart is 1 - 2·k, it follows that we have k gaps between the edges of the wrapper and those of the chart along this direction as well.
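As a quick sanity check of that arithmetic, here it is with k = .1 and a hypothetical 400px wrapper edge (numbers assumed purely for illustration):

```javascript
// Layout proportions from the sketch: wrapper edge = 1, thumb = k, chart = 1 - 2k
const wrapperEdge = 400;                  // assumed wrapper edge in px
const k = 0.1;                            // thumb size as a fraction of the edge
const thumbD = wrapperEdge * k;           // thumb: 40px
const chartD = wrapperEdge * (1 - 2 * k); // chart edge: (1 - 2k) of the wrapper
const gap = (wrapperEdge - chartD) / 2;   // gap on each side of the chart

console.log(chartD); // 320: the chart takes up 80% of the wrapper edge
console.log(gap);    // 40: one thumb width (k) of space on each side
```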

Sizing Our Elements

The first step towards getting this layout is making the wrapper square and setting the dimensions of the output to (1 - 2*$k)*100%:

$k: .1;
$track-w: 25em;
$chart-d: (1 - 2*$k)*100%;

.wrap.full {
  width: $track-w;

  output {
    width: $chart-d;
    height: $chart-d
  }
}

The result can be seen below, where we've also added some outlines to see things better:

The result in a first stage (live demo, only if we have native conic-gradient() support).

This is a good start, as the output is already in the exact spot we want it to be.

Making the Slider Vertical

The "official" way of doing this for WebKit browsers is by setting -webkit-appearance: vertical on the range input. However, this would break the custom styles as they require us to have -webkit-appearance set to none and we cannot have it set to two different values at the same time.

So the only convenient solution we have is to use a transform. As it is, we have the minimum of the slider at the left end of the wrapper and the maximum at its right end. What we want is to have the minimum at the bottom of the wrapper and the maximum at the top of the wrapper.

The initial position of the slider vs. the final position we want to bring it to (live).

This sounds like a 90° rotation in the negative direction (as the clockwise direction is the positive one) around the top right corner (which gives us a transform-origin that's at 100% horizontally and 0% vertically).

See the Pen by thebabydino (@thebabydino) on CodePen.

That's a good start, but now our slider is outside the wrapper boundary. In order to decide what's the best next step to bring it inside in the desired position, we need to understand what this rotation has done. Not only has it rotated the actual input element, but it has also rotated its local system of coordinates. Now its x axis points up and its y axis points to the right.

So in order to bring it inside, along the right edge of the wrapper, we need to translate it by its own height in the negative direction of its y axis after the rotation. This means the final transform chain we apply is rotate(-90deg) translatey(-100%). (Remember that % values used in translate() functions are relative to the dimensions of the translated element itself.)

.wrap.full {
  input {
    transform-origin: 100% 0;
    transform: rotate(-90deg) translatey(-100%)
  }
}

This gives us the desired layout:

The result in a second stage (live demo, only if we have native conic-gradient() support).

Styling the Chart

Of course, the first step is to make it round with border-radius and tweak the color, font-size and font-weight properties.

.wrap.full {
  output {
    border-radius: 50%;
    color: #7a7a7a;
    font-size: 4.25em;
    font-weight: 700
  }
}

You may have noticed we've set the dimensions of the chart as (1 - 2*$k)*100% instead of (1 - 2*$k)*$track-w. This is because $track-w is an em value, meaning that the computed pixel equivalent depends on the font-size of the element that uses it.

However, we wanted to be able to increase the font-size here without having to tweak down an em-valued size. This is possible and not that complicated, but compared to just setting the dimensions as % values that don't depend on the font-size, it's still a bit of extra work.

The result in a third stage (live demo, only if we have native conic-gradient() support).

From Pie 🥧 to Doughnut 🍩

The simplest way to emulate that hole in the middle where we have the text is to add another background layer on top of the conic-gradient() one. We could probably add some blend modes to do the trick, but that's not really necessary unless we have an image background. For a solid background as we have here, a simple cover layer will do.

$p: 39%;

background: radial-gradient($bg $p, transparent $p + .5% /* avoid ugly edge */),
            conic-gradient(#e64c65 calc(var(--val)*1%), #41a8ab 0%);

Alright, this does it for the chart itself!

The result in a fourth stage (live demo, only if we have native conic-gradient() support).

Showing the Value on the Thumb

We do this with an absolutely positioned ::after pseudo-element on the wrapper. We give this pseudo-element the dimensions of the thumb and position it in the bottom right corner of the wrapper, precisely where the thumb is when the slider value is at its minimum.

.wrap.full {
  position: relative;

  &::after {
    position: absolute;
    right: 0;
    bottom: 0;
    width: $thumb-d;
    height: $thumb-d;
    content: '';
  }
}

We also give it an outline just so that we can see it.

The result in a fifth stage (live demo, only if we have native conic-gradient() support).

Moving it along with the thumb is achieved in exactly the same way as in the no chart case, except this time the translation happens along the y axis in the negative direction (instead of along the x axis in the positive direction).

transform: translatey(calc(var(--val)/-100*#{$track-w - $thumb-d}))
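In plain numbers, the translation mirrors the thumb's travel along the track. A quick sketch, with hypothetical pixel stand-ins for $track-w and $thumb-d:

```javascript
// The pseudo-element's vertical offset for a given slider value,
// mirroring translatey(calc(var(--val)/-100*(track - thumb))).
// trackW and thumbD are hypothetical pixel values.
const trackW = 400, thumbD = 40;
const offsetY = (val) => (val / -100) * (trackW - thumbD);

console.log(offsetY(50));  // -180: halfway up the track
console.log(offsetY(100)); // -360: all the way to the top
```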

In order to be able to drag the thumb underneath, we have to also set pointer-events: none on this pseudo-element. The result can be seen below—dragging the thumb also moves the wrapper's ::after pseudo-element.

The result in a sixth stage (live demo, only if we have native conic-gradient() support).

Alright, but what we really want here is to display the current value using this pseudo-element. Setting its content property to var(--val) does nothing, as --val is a number value, not a string. If we were to set it as a string, we could use it as a value for content, but then we couldn't use it for calc() anymore.

Fortunately, we can get around this problem with a neat trick using CSS counters:

counter-reset: val var(--val);
content: counter(val)'%';

Now the whole thing is functional, yay!

The result in a seventh stage (live demo, only if we have native conic-gradient() support).

So let's move on to prettifying and adding some nice touches. We put the text in the middle of the thumb, we make it white, we get rid of all the outlines and we set cursor: pointer on the input:

.wrap.full {
  &::after {
    line-height: $thumb-d;
    color: #fff;
    text-align: center
  }
}

[type='range'] {
  /* same as before */
  cursor: pointer
}

This gives us the following nice result:

The final look for the chart case (live demo, only if we have native conic-gradient() support).

Eliminating Repetition

One thing that's nagging me is the fact that we have a bunch of common styles on the output in the no chart case and on the .wrap:after in the chart case.

Styles on the output in the no chart case vs. styles on the .wrap:after in the chart case.

We can do something about this and that's using a silent class we then extend:

%thumb-val {
  position: absolute;
  width: $thumb-d;
  height: $thumb-d;
  color: #fff;
  pointer-events: none
}

.wrap {
  &:not(.full) output {
    @extend %thumb-val;
    /* same other styles */
  }

  &:after {
    @extend %thumb-val;
    /* same other styles */
  }
}

Nice Focus Styles

Let's say we don't want to have that ugly outline on :focus, but we also want to clearly differentiate this state visually. So what could we do? Well, let's say we make the thumb smaller and a bit desaturated when the input isn't focused and that we also hide the text in this case.

Sounds like a cool idea...but, since we have no parent selector, we cannot trigger a property change on the ::after of the slider's parent when the slider gets or loses focus. Ugh.

What we can do instead is use the output's other pseudo-element (the ::before) to display the value on the thumb. This doesn't come without complications of its own, which we'll discuss in a moment, but it allows us to do something like this:

[type='range']:focus + output:before { /* focus styles */ }

The problem with taking this approach is that we're blowing up the font on the output itself, but for its ::before we need it to be the same size and weight as on the wrapper.

We can solve this by setting a relative font size as a Sass variable $fsr and then use that value to blow up the font on the actual output and bring it back down to its previous size on the output:before pseudo-element.

$fsr: 4;

.wrap {
  color: $fg;

  &.full {
    output {
      font-size: $fsr*1em;

      &:before {
        /* same styles as we had on .wrap:after */
        font-size: 1em/$fsr;
        font-weight: 200;
      }
    }
  }
}

Other than that, we just move the CSS declarations we had on .wrap:after over to output:before.

Styles on the wrapper pseudo-element vs. on the output pseudo-element.

Alright, now we can move on to the final step of differentiating between the normal and the focused look.

We start by hiding the ugly :focus state outline and the value on the thumb when the slider isn't focused:

%thumb-val {
  /* same styles as before */
  opacity: 0;
}

[type='range']:focus {
  outline: none;

  .wrap:not(.full) & + output,
  .wrap.full & + output:before { opacity: 1 }
}

The value on the thumb is only visible when the slider gets focus (live demo, only if we have native conic-gradient() support).

Next, we set different styles for the normal and focused states of the slider thumb:

@mixin thumb() {
  /* same styles as before */
  transform: scale(.7);
  filter: saturate(.7)
}

@mixin thumb-focus() {
  transform: none;
  filter: none
}

[type='range']:focus {
  /* same as before */

  &::-webkit-slider-thumb { @include thumb-focus }
  &::-moz-range-thumb { @include thumb-focus }
  &::-ms-thumb { @include thumb-focus }
}

The thumb is scaled down and desaturated as long as the slider isn't focused (live demo, only if we have native conic-gradient() support).

The last step is to add a transition between these states:

$t: .5s;

@mixin thumb() {
  /* same styles as before */
  transition: transform $t linear, filter $t
}

%thumb-val {
  /* same styles as before */
  transition: opacity $t ease-in-out
}

The demo with a transition between the normal and focused state (live demo, only if we have native conic-gradient() support).

What About Screen Readers?

Since screen readers these days read generated content, we'd have the % value read twice in this case. So we get around this by setting role='img' on our output and then putting the current value that we want to be read in an aria-label attribute:

let conic = false;

function update() {
  let newval = +_R.value;

  if(val !== newval) {
    _W.style.setProperty('--val', _O.value = val = newval);
    if(conic) _O.setAttribute('aria-label', `${val}%`)
  }
};

update();

_O.setAttribute('for', _R.id);
_W.appendChild(_O);

if(getComputedStyle(_O).backgroundImage !== 'none') {
  conic = true;
  _W.classList.add('full');
  _O.setAttribute('role', 'img');
  _O.setAttribute('aria-label', `${val}%`)
}

The final demo can be found below. Note that you only see the fallback if your browser has no native conic-gradient() support.

See the Pen by thebabydino (@thebabydino) on CodePen.

Final Words

While the browser support for this is still poor, the situation will change. For now it's just Blink browsers that expose flags, but Safari lists conic-gradient() as being in development, so things are already getting better.

If you'd like cross-browser support to become a reality sooner rather than later, you can contribute by voting for conic-gradient() implementation in Edge or by leaving a comment on this Firefox bug on why you think this is important or what use cases you have in mind. Here are mine for inspiration.

Using Conic Gradients and CSS Variables to Create a Doughnut Chart Output for a Range Input is a post from CSS-Tricks

Recreating the GitHub Contribution Graph with CSS Grid Layout

Fri, 02/02/2018 - 14:10

Ire Aderinokun sets out to build the GitHub contribution graph — that's the table with lots of green squares indicating how much you've contributed to a project — with CSS Grid:

As I always find while working with CSS Grid Layout, I end up with far less CSS than I would have using almost any other method. In this case, the layout-related part of my CSS ended up being less than 30 lines, with only 15 declarations!

I’m so excited about posts like this because it shows just how much fun CSS Grid can be. Likewise, Jules Forrest has been making a number of brilliant experiments on this front where she reimagines complex print layouts or even peculiar menu designs.

Direct Link to ArticlePermalink

Recreating the GitHub Contribution Graph with CSS Grid Layout is a post from CSS-Tricks

JavaScript, I love you, you’re perfect, now change

Thu, 02/01/2018 - 14:19

Those of us who celebrate Christmas or Hanukkah probably have strong memories of the excitement of December. Do you remember the months leading up to Christmas, when your imagination exploded with ideas, answers to the big question "What do you want for Christmas?" As a kid, because you aren't bogged down by adult responsibility and even the bounds of reality, the list could range anywhere from "legos" to "a trip to the moon" (which seems like it will be more likely in years to come).

Thinking outside of an accepted base premise—the confines of what we know something to be—can be a useful mental exercise. I love JavaScript, for instance, but what if, like Christmas as a kid, I could just decide what it could be? There are small tweaks to the syntax that would not change my life, but make it just that much better. Let's take a look.

As my coworker and friend Brian Holt says,

Get out your paintbrushes! Today, we're bikeshedding!

Template Literals

First off, I should say, template literals were quite possibly my favorite thing about ES6. As someone who regularly manipulates SVG path strings, moving from string concatenation to template literals quite literally changed my damn life. Check out the return of this function:

function newWobble(rate, startX) {
  ...
  if (i % 2 === 0) {
    pathArr2[i] = pathArr2[i] + " Q " + in1 + " " + QRate;
  } else {
    pathArr2[i] = pathArr2[i] + " Q " + in2 + " " + QRate;
  }
  ...
  return "M" + pathArr2.join("") + " " + startX + " " + (inc * (rate*2) + rate);
}

Instead becomes

const newWobble = (rate, startX) => {
  ...
  if (i % 2 === 0) {
    pathArr2[i] = `${pathArr2[i]} Q ${in1} ${QRate}`;
  } else {
    pathArr2[i] = `${pathArr2[i]} Q ${in2} ${QRate}`;
  }
  ...
  return `M${pathArr2.join("")} ${startX} ${(inc * (rate*2) + rate)}`;
}

...which is much easier to read and work with. But could this be improved? Of course it can!
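As a quick runnable check that the two styles build identical strings (the values here are hypothetical stand-ins for the function's locals):

```javascript
// Both forms produce the same SVG path fragment; the template
// literal just reads better. seg, in1 and QRate are hypothetical values.
const seg = "0 0", in1 = 10, QRate = 5;

const concatenated = seg + " Q " + in1 + " " + QRate;
const templated = `${seg} Q ${in1} ${QRate}`;

console.log(templated === concatenated); // true
console.log(templated);                  // "0 0 Q 10 5"
```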

There is a small bit of cognitive load incurred when we have to parse ${x}, mostly due to the very nature of the characters themselves. So, what if template literals lost the dollar sign and moved to square brackets instead? Rather than:

return `M${pathArr2.join("")} ${startX} ${(inc * (rate*2) + rate)}`

...we can have something like:

return `M[pathArr2.join("")] [startX] [(inc * (rate*2) + rate)]`

...which is much more streamlined.

Ternary operators

Ternary operators are interesting because in recent years, they have not changed, but we have. A lot of modern JavaScript makes heavy use of ternaries, which causes me to revisit their syntax as it stands now.

For instance, a one-liner like:

const func = function( .. ) { return condition1 ? value1 : value2 }

...is not so hard to read and grok. But here’s what I’ve been reading a lot lately:

const func = function( .. ) {
  return condition1 ? value1
       : condition2 ? value2
       : condition3 ? value3
       : value4
}

This is much harder to read, mostly because the colon : gets lost depending on your code editor and syntax highlighting settings. And, what if someone isn’t properly formatting that code? It can easily become:

const func = function( .. ) { return condition1 ? value1 : condition2 ? value2 : condition3 ? value3 : value4 }

...in which case the colons are extremely hard to see at a glance. So what if we used a visual indicator that was a little stronger?

const func = function( .. ) {
  return condition1 ? value1
       | condition2 ? value2
       | condition3 ? value3
       | value4
}

A pipe doesn’t break up the flow, yet still separates in a way that is not as easy to get lost in the line.
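For reference, today's chained form is perfectly valid JavaScript, with the trailing ternary acting as the final fallback. A runnable sketch with hypothetical thresholds:

```javascript
// A chained ternary as it runs today; each `:` hands off to the next
// condition, and the last value is the fallback. The grading
// thresholds here are hypothetical.
const grade = (score) =>
  score >= 90 ? "A"
  : score >= 80 ? "B"
  : score >= 70 ? "C"
  : "F";

console.log(grade(85)); // "B"
console.log(grade(12)); // "F"
```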

Arrow Functions

I’m going to have a mob after me for this one because it’s everyone’s favorite, but arrow functions were always a miss for me. Not because they aren’t useful—quite the opposite. Arrow functions are wonderful! But there was always something about the legibility of that fat arrow that irked me. I am used to them now, but it troubled me that when I was first learning them, it took me an extra second or two to read them. Eventually this passed, but let’s pretend we can have our cake and eat it too.

I am definitely not suggesting that we still use the word function. In fact, I would love it if arrow functions weren’t anonymous by nature because:

const foo = (y) => {
  const x
  return x + y
}

...is not quite as elegant as:

const foo(y) => {
  const x
  return x + y
}

In my perfect world, we would drop the function and the arrow so that we could have something that resembles more of a method:

foo(y) {
  const x
  return x + y
}

and an anonymous function could simply be:

(y) {
  const x
  return x + y
}

Or even a one liner:

(y) { y += 1 }

I know many people will bring up the fact that:

  1. arrow functions have one-liners that do this, and
  2. I disliked the curly brackets in the template literals above

The reason I like this is that:

  1. some encapsulation can provide clarity, especially for logic, and
  2. curly brackets are a stronger visual signal, because they're more visual noise. Functions are important enough to need that sort of high-level visual status, whereas template literals are not.
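For comparison, here is how today's arrow syntax handles this: implicit return already exists, but only in the concise one-liner form (the function names below are hypothetical):

```javascript
// Today's arrows: a concise body returns implicitly; a block body
// needs an explicit `return`.
const addOne = (y) => y + 1;             // implicit return
const addTwo = (y) => { return y + 2; }; // block body: explicit return

console.log(addOne(1)); // 2
console.log(addTwo(1)); // 3
```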

OK, now let’s go one step deeper. What if we always had an implicit return on the last line? So, now we could do:

foo(y) {
  const x
  x + y
}


(y) {
  const x
  x + y
}

If we didn’t want to return, we could still say:

foo(y) {
  const x
  x + y
  return
}

Or, better yet, use a special character:

foo(y) {
  const x
  x + y
  ^
}

This way, anytime you wanted to return a different line instead of the last, you could use return and it would work just as normal:

foo(y) {
  const x
  return x + y
  const z
}

What a world it could be, eh?

What Now?

People invent new languages and rewrite compilers for the very reason of having a strong opinion on how a language should pivot or even how it should be written at all. Some of my favorite examples of this include Whitespace, which is a programming language created entirely from tabs and spaces, and Malbolge, which was specifically designed to be impossible to program with. (If you think I'm a troll for writing this article, I got nuthin' on the guy who wrote Malbolge.) From the article:

Indeed, the author himself has never written a single Malbolge program

For those more serious about wanting to develop their own programming language, there are resources available to you, and it's pretty interesting to learn.

I realize that there are reasons JavaScript can't make these changes. This article is not intended to be a TC39 proposal, it's merely a thought exercise. It's fun to reimagine the things you see as immovable to check your own assumptions about base premises. Necessity might be the mother of invention, but play is its father.

Many thanks to Brian Holt and Kent C. Dodds for indulging me and proofing this article.

JavaScript, I love you, you’re perfect, now change is a post from CSS-Tricks

Take a coding quiz, get offers from top tech companies

Thu, 02/01/2018 - 14:18

(This is a sponsored post.)

That's how TripleByte works. The companies that find hires from TripleByte (like Dropbox, Apple, Reddit, Twitch, etc) don't have as many underqualified applicants to sort through because they've come through a technical interview of sorts already.

Direct Link to ArticlePermalink

Take a coding quiz, get offers from top tech companies is a post from CSS-Tricks

Aspect Ratios with SVG

Thu, 02/01/2018 - 14:16

I quite like this little trick from Noam Rosenthal:

<style>
  .aspectRatioSizer {
    display: grid;
  }
  .aspectRatioSizer > * {
    grid-area: 1 / 1 / 2 / 2;
  }
</style>
<div class="aspectRatioSizer">
  <svg viewBox="0 0 7 2"></svg>
  <div>
    Content goes here
  </div>
</div>

Two things going on there:

  1. As soon as you give an <svg> a viewBox, it goes full-width, but only as tall as the implied aspect ratio in the viewBox value. The viewBox value is essentially "min-x, min-y, width, height" for the coordinate system internal to the SVG, but it has the side-effect of sizing the element itself when it has no height of its own. That's what is used to "push" the parent element into an aspect ratio as well. The parent will still stretch if it has to (e.g. more content than fits), which is good.
  2. CSS Grid is used to place both elements on top of each other, and the source order keeps the content on top.
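A quick sketch of the arithmetic the browser is doing here: for a full-width <svg>, the viewBox's width and height fix the rendered height (the parsing below assumes a simple space-separated viewBox string):

```javascript
// Height implied by a viewBox for an <svg> rendered at a given width.
// Assumes a simple space-separated viewBox string.
function impliedHeight(viewBox, renderedWidth) {
  const [, , w, h] = viewBox.split(/\s+/).map(Number);
  return renderedWidth * (h / w);
}

// The "0 0 7 2" box from the example implies a 7:2 aspect ratio:
console.log(impliedHeight("0 0 7 2", 700)); // 200
```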

Direct Link to ArticlePermalink

Aspect Ratios with SVG is a post from CSS-Tricks

A Site About Serverless Technology

Wed, 01/31/2018 - 20:47

I know some of you have a visceral and negative feeling toward the word serverless. I felt that way at first too, but I'm kinda over it. Even if it's not a perfect word, it's done a good job of encapsulating a movement into a single word. That movement is far more than I'm qualified to explain.

But I do very much think it's worth knowing about. Developers of all sorts can take advantage of it, but I'm particularly fascinated about what it can do to extend the front-end developer toolbelt.

I made a website called The Power of Serverless for Front-End Developers just for this! Rather than re-hash here what it says there, I'll just send you there:

The site offers an introduction to why I find it compelling, a list of ideas that might fit nicely into the wheelhouse of a front-end developer who is looking to expand what they know how to do, and a growing list of resources.

Perhaps the most useful feature of the site is a big ol' list of services that fit into the serverless bucket. You've got options!

Direct Link to ArticlePermalink

A Site About Serverless Technology is a post from CSS-Tricks

Designer-Oriented Styles

Wed, 01/31/2018 - 18:00

James Kyle:

Components are a designer’s bread and butter. Designers have been building design systems with some model of “component” for a really long time. As the web has matured, from Atomic Design to Sketch Symbols, “components” (in some form or another) have asserted themselves as a best practice for web designers ...

Designers don’t care about selectors or #TheCascade. They might make use of it since it’s available, but #TheCascade never comes up in the design thought process.

(Okay okay... most designers. You're special. But we both knew that already.)

I think James makes strong points here. I'm, predictably, in the camp in which I like CSS. I don't find it particularly hard or troublesome. Yet, I don't think in CSS when designing. Much easier to think (and work) in components, nesting them as needed. If the developer flow matched that, that's cool.

I also agree with Sarah Federman who chimed in on Twitter:

It seems a bit premature to look at the current landscape of component CSS tooling and say that it's designer-friendly.

The whole conversation is worth reading, ending with:

Tooling that treats component design as an interface with the code is where it's at/going to be. Hopefully, designers will be more empowered to create component styles when we can meet them closer to their comfort zone.

Direct Link to ArticlePermalink

Designer-Oriented Styles is a post from CSS-Tricks

Building a Good Download… Button?

Wed, 01/31/2018 - 14:42

The semantics inherent in HTML elements tell us what we’re supposed to use them for. Need a heading? You’ll want a heading element. Want a paragraph? Our trusty friend <p> is here, loyal as ever. Want a download? Well, you’re going to want... hmm.

What best describes a download? Is it a triggered action, and therefore should be in the domain of the <button> element? Or is it a destination, and therefore best described using an <a> element?

Buttons Do Things, Links Go Places

There seems to be a lot of confusion over when to use buttons and when to use links. Much like tabs versus spaces or pullover hoodies versus zip-ups, this debate might rage without end.

However, the W3C provides us with an important clue as to who is right: the download attribute.

The What Now?

The internet as we know it couldn’t exist without links. They form the Semantic Web, the terribly wonderful, wonderfully terrible tangled ball of information that enables you to read this article at this very moment.

Like all other elements, anchor links can be modified by HTML’s global attributes. Anchor link elements also possess a number of unique attributes that help control how they connect to other documents and files.

One of those attributes is called download. It tells the browser that the destination of the link should be saved to your device instead of visiting it. You’re still "navigating" to the file, only instead of viewing it, you’re snagging a copy for your own use.

Any kind of file can be a download! This even includes HTML, something the browser would typically display. The presence of the attribute is effectively a human-authored flag that tells the browser to skip trying to render something it has retrieved and just store it instead:

<a download href="recipe.html">
  Download recipe
</a>

This raises a very important point: we can’t know every user’s reason for why they’re visiting our website, but we can use the tools made available to us to help guide them along their way. If that means storing an HTML document for use offline, we’re empowered to help make the experience as easy as possible.

Other Evidence

Still not convinced? Here’s some more food for thought:

Progressive Enhancement

JavaScript is more brittle than we care to admit. <a> elements function even if JavaScript breaks. Using anchors for your download means that a person can access what they need, even in suboptimal situations.

A robust solution is always the most desirable—in a time of crisis, it might even save a life. This might sound hyperbolic, but having a stable copy of something that works offline could make all the difference in a time of need.

Semantics and Accessibility

My friend Scott, who is paid to know these kinds of things, tells us:

The debate about whether a button or link should be used to download a file is a bit silly, as the whole purpose of a link has always been to download content. HTML is a file, and like all other files, it needs to be retrieved from a server and downloaded before it can be presented to a user.

The difference between a Photoshop file, HTML, and other understood media files, is that a browser automatically displays the latter two. If one were to link to a Photoshop .psd file, the browser would initiate a document change to render the file, likely be all like, "lol wut?" and then just initiate the OS download prompt.

The confusion seems to come from developers getting super literal with the "links go places, buttons perform actions." Yes, that is true, but links don’t actually go anywhere. They retrieve information and download it. Buttons perform actions, but they don’t inherently "get" documents. While they can be used to get data, it’s often to change state of a current document, not to retrieve and render a new one. They can get data, in regards to the functionality of forms, but it continues to be within the context of updating a web document, not downloading an individual file.

Long story short, the download attribute is unique to anchor links for a reason. download augments the inherent functionality of the link retrieving data. It side steps the attempt to render the file in the browser and instead says, "You know what? I’m just going to save this for later..."

Thanks, Scott!

Designing a Good Download Link

The default experience of downloading a file can be jarring—it typically isn’t part of our normal browsing behavior. The user has to shift their mental model from flitting from page-to-page and filling out forms to navigating a file system and extracting compressed archives. For less technologically-savvy individuals, it can be a disorienting and frustrating context shift.

As responsible designers and developers, we want to make the experience of interacting with a download link as good as it possibly can be. As we can’t modify how the browser’s download behavior itself operates, we should make the surrounding user experience as transparent and streamlined as possible.

Tell Me What’s in Store

Give the user as much information as you can to help inform them on what’s about to happen. Anticipating and answering the following questions can help:


Verb plus noun is the winning combination. Describe what the link does and what it gets you:

<a download href="downloads/fonts.zip">
  Download Fonts
</a>

By itself, the verb Download would only signal what behavior will be triggered when the link is activated. Including the noun Fonts is great for removing ambiguity about what you’ll be getting.

In cases where there’s multiple download links on a page, the presence of the noun will help users navigating via screen reader. Here’s what it would sound like if you were browsing a page that had eight noun-less download links:

Do you know which one of those eight links gets you what you want? No? That’s not great. Similarly, the presence of the download attribute on an <a> element won’t be announced by screen readers, so the verb is equally vital. It’s important to provide context!

Remember that obvious always wins. While it is possible to make your compliance checks happy by using a visually hidden CSS class to hide the noun portion of your download, it places extra cognitive burden on your users. A hidden noun also sacrifices functionality like the browser’s search-on-page capability.

How Long?

KB, MB, GB, TB. We’re not talking about Kobe Beef, Mega Bloks, Ginkgo Biloba, or Tuberculosis. We’re talking about the size of the download, and consequently how long it will take for the download to finish.

Know your audience. A file with a size of 128 KB will download much faster than a file with a size of 2 GB, yet its number is drastically larger. Unless your audience has familiarity with the terminology used to describe file size, they may not understand what they’re getting themselves into if you only tell them the size of the payload.
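One small thing we can do is convert the raw byte count into the nearest familiar unit before it goes into the link's copy. A sketch (the unit list and one-decimal rounding are arbitrary choices):

```javascript
// Format a payload size for a download link's micro-copy.
// Binary units (1024) and one-decimal rounding are arbitrary choices.
const UNITS = ["B", "KB", "MB", "GB", "TB"];

function formatSize(bytes) {
  let size = bytes;
  let i = 0;
  while (size >= 1024 && i < UNITS.length - 1) {
    size /= 1024;
    i += 1;
  }
  return `${Math.round(size * 10) / 10} ${UNITS[i]}`;
}

console.log(formatSize(131072));     // "128 KB"
console.log(formatSize(2147483648)); // "2 GB"
```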

For larger files, the wait time can be especially problematic. A standard download is an all-or-nothing affair—interruptions can corrupt them and render them useless. Worse, it can waste valuable data on a metered data plan, an unfortunately all-too-relevant concern.

It is also incredibly difficult to accurately ascertain someone’s current connection speed. While the Network Information API looks promising, current browser support isn’t so hot.
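If you do want to peek at the API where it exists, guard the call so unsupporting environments fall back gracefully. A sketch (the vendor-prefixed lookups are defensive assumptions, not guaranteed properties):

```javascript
// Read the Network Information API where available, falling back to
// "unknown" everywhere else, since support is patchy.
function connectionHint() {
  const nav = typeof navigator !== "undefined" ? navigator : undefined;
  const conn =
    nav && (nav.connection || nav.mozConnection || nav.webkitConnection);
  return conn && conn.effectiveType ? conn.effectiveType : "unknown";
}

console.log(connectionHint()); // e.g. "4g" in a supporting browser
```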

There is hidden nuance living in the gap between reported and actual connection. A high speed 5G connection could drop the second someone enters a tunnel or walks to a spot in their house where coverage isn’t good. This isn’t even beginning to cover the complexities involved with throttling, an unfortunately all-too-real concern.

To address these issues, apply a little micro-copy:

See the Pen Download movie by Eric Bailey (@ericwbailey) on CodePen.

Your user is going to know the particulars of their connection quality better than you ever will. Now they have what they need to make an informed decision, with a little intentional ambiguity to temper expectations.

But what about progress bars?

Progress bars are UI elements that show how close a computational task is to completion. For UX designers, they’re a staple (and an opportunity to play around with perceived performance). However, I’m wary of employing them when it comes to downloads.

At best, they’re redundant. Browsers already supply UI to indicate how the download is progressing. At worst, they’re a confusing liability. Adding them introduces unnecessary implementation and maintenance complications—especially when combined with the issues in determining connection speed and quality outlined earlier.


Sell the user on why they should care. Will it remove frustration by fixing an existing problem? Will it increase enjoyment by adding a new feature? Will it reassure by making things more secure? While not every download needs the "why?" question answered, it is good to have for payloads with a complicated or esoteric purpose.

If I am downloading router firmware, I may not understand (or care about) the nitty-gritty of what the update does behind the scenes. However, some high-level communication about why I need to undertake the endeavor will go a long way.

See the Pen Describe what the download does by Eric Bailey (@ericwbailey) on CodePen.

What Next?

Instructions on what to do after the download has completed could be useful. Again, knowing your audience is key.

With our router example, it is entirely possible that less technically-savvy individuals will find themselves on the product support page. It’s also highly possible that they’re in a distressed emotional state when they arrive. After a download has been initiated, step-by-step information on how to install the new firmware, as well as links to relevant support resources could go a long way to alleviating negative feelings.

This is practical empathy. Anticipating the user’s needs and emotional state and preemptively offering solutions has a direct effect on things like reducing expensive support calls. These savings means organizational resources can be reallocated to other important endeavors.

Taking it to Code

Signal That it’s Different

Links use underlines.

A good practice from both a user experience and an accessibility perspective is to create a distinction between internal and external links. This means creating an indicator that a link does something other than take you to another place on your website or webapp. For links that go off-site, a common practice is to use an arrow breaking out of a box. For downloads, a downward-facing arrow is the de facto standard.

Examples of icons for downloads (top) and external links (bottom). Courtesy of Noun Project.

Some may feel that the presence of the download attribute is redundant when applied to links the browser already knows to store. I disagree.

In addition to being an unambiguous semantic marker in the HTML, the download attribute can serve as a simple and elegant styling hook. CSS attribute selectors—code that lets us create styling based on the qualities that help describe HTML elements—allow us to target any link that is a download and style it without having to attach a special class:

a[download] {
  color: hsla(216, 70%, 53%, 1);
  text-decoration: underline;
}

a[download]::before {
  content: url('../icons/icon-download.svg');
  height: 1em;
  position: relative;
  top: 0.75em;
  right: 0.5em;
  width: 1em;
}

a[download]:hover,
a[download]:focus {
  text-decoration: none;
}

Combined with the text describing the download, the presence of the icon clearly communicates that when you activate this link, download behavior will follow. It also provides extra target area, great for touch devices.

Targeting both the presence of the download attribute and the file extension at the end of the string in the href attribute allows us to get even fancier. We can take advantage of the cascade to set up a consistent treatment for all icons, but change the icon itself on a per-filetype basis. This is great for situations where there are multiple kinds of things you can download on a single page:

See the Pen Download icons by Eric Bailey (@ericwbailey) on CodePen.
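The idea in the Pen boils down to a pair of attribute selectors. A sketch (the icon paths are placeholders, not real files):

```css
/* Shared treatment for every download icon. */
a[download]::before {
  content: url("../icons/icon-download.svg");
  height: 1em;
  width: 1em;
  position: relative;
  top: 0.75em;
  right: 0.5em;
}

/* Swap the icon per filetype by matching the end of the href. */
a[download][href$=".pdf"]::before { content: url("../icons/icon-pdf.svg"); }
a[download][href$=".zip"]::before { content: url("../icons/icon-zip.svg"); }
a[download][href$=".mp3"]::before { content: url("../icons/icon-audio.svg"); }
```

Because the shared rule comes first, each filetype-specific rule only has to override `content`, and everything else cascades through.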

I maintain a list of common filetypes, if you’re looking for a starting point. Remember to only include the selectors you need, so as to not create unnecessary bloat in your production CSS. If your website or webapp features many icons and/or a lot of fancy state management, consider using a SVG icon system instead. It will improve performance—just remember to make it accessible!

Name the Payload

The download attribute can accept an optional value, allowing the author to create a custom, human-friendly name for the downloaded file. In this example, we’re changing the name of a podcast episode to something that makes sense when downloaded to the user’s device, while maintaining something that makes sense to the podcast’s producer:

<a download="Pod-People-Podcast-Episode-12-Feed-Me-Seymour.mp3"
   href="https://www.dropbox.com/s/txf5933cwxhv1so6/12-final-v5-RADIO-EDIT.m4a?dl=0">
  Download Episode 12
</a>

For complicated sites, this attribute allows us to create downloads that make sense to the person requesting them, while also taking advantage of features like CDNs and dynamically-generated files. Not a lot of complicated backend sorcery here, just a little template logic:

<a href="https://s3-us-west-2.amazonaws.com/ky22o6s6g8be80bak577b17e34bb93cex3.pdf"
   download="{{ user.name | slugify }}-{{ 'now' | date: "%Y" }}-tax-return.pdf">
  Download your {{ 'now' | date: "%Y" }} Tax Return
</a>

Material Honesty

Keeping content looking and behaving like the HTML elements used to describe it is great for reinforcing external consistency. Externally consistent content is great for ensuring people can, and will use your website or webapp. Use is great for engagement, a metric that makes business-types happy.

And yet, link-y buttons and button-y links are everywhere.

We can lay blame for this semantic drift squarely at the feet of trend. Designers and developers eager to try the latest and greatest invite ambiguity in with outstretched arms. Leadership chases perceived value to stay relevant.

It doesn’t have to be this way. Websites can be both beautiful and accessible. Semantics and current frameworks/aesthetics aren’t mutually exclusive. Take a little time to review the fundamentals—you just might discover something simple that helps everyone get what they need with just a little bit less fuss.

Building a Good Download… Button? is a post from CSS-Tricks

Boilerform: A Follow-Up

Tue, 01/30/2018 - 14:53

When Chris wrote his idea for a Boilerform, I had already been thinking about starting a new project. I’d just decided to put my front-end boilerplate to bed, and wanted something new to think about. Chris’ idea struck a chord with me immediately, so I got enthusiastically involved in the comments like an excitable puppy. That excitement led me to go ahead and build out the initial version of Boilerform, which you can check out here.

The reason for my initial excitement was that I have a guilty pleasure for forms. In various jobs, I’ve worked with forms at a pretty intense level and have learned a lot about them. This has ranged from building dynamic form builders to high-level spam protection for a Harley-Davidson® website platform. Each different project has given me a look at the front-end and back-end of the process. Each of these projects has also picked away at my tolerance for quick, lazy implementations of forms, because I’ve seen the drastic implications of this at scale.

But hey, we’re not bad people. Forms are a nightmare to work with. Although things are better now, each browser still treats them slightly differently. For example, check out these select menus from a selection of browsers and OSes. Not one of them looks the same.

These are just the tip of the inconsistency iceberg.

Because of these inconsistencies, it’s easy to see why developers bail out of digging too deep or just spin up a copy of Bootstrap and be done with it. Also, in my experience, the design of minor forms, such as a contact form, is left until later in the project, when most of the positive momentum has already gone. I’ve even been guilty of building contact forms a day before a website’s launch. 😬

There’s clearly an opportunity to make the process of working with forms—on the front-end, at least—better and I couldn’t resist the temptation to make it!

The Planning

I sat and thought about what pain-points there are when working with forms and what annoys me as a user of forms. I decided that as a developer, I hate styling forms. As a user, poorly implemented form fields annoy me.

An example of the latter is email fields. If you try to fill in an email field on an iOS device, you get that annoying trait of the first letter being capitalized by the browser, because it treats the input like a sentence. All you have to do to stop that behaviour is add autocapitalize="none" to your field. I know this isn’t commonly known, because I rarely see it in place, but it’s such a quick win to have a positive impact on your users.
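As a sketch, the whole fix is one attribute on the field; the companion attributes here are common additions for email and name inputs, not requirements:

```html
<label for="email">Email address</label>
<input
  type="email"
  id="email"
  name="email"
  autocapitalize="none"
  autocorrect="off"
  spellcheck="false">
```

The `autocorrect` and `spellcheck` attributes stop browsers from "fixing" an address that was typed correctly in the first place.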

I wanted to bake these little tricks right into Boilerform to help developers make a user’s life easier. Creating a front-end boilerplate or framework is about so much more than styling and aesthetics. It’s about sharing your gained experience with others to make the landscape better as a whole.

The Specification

I needed to think about what I wanted Boilerform to do as a minimum viable product, at initial launch. I came up with the following rules:

  • It had to be compatible with most front-ends
  • It had to be well documented
  • It had to be lightweight
  • Someone should be able to drop a CDN link to their <head> and have it just work
  • Someone should also be able to expand on the source for their own projects
  • It shouldn’t be too opinionated

To achieve these points, I had some technology decisions to make. I decided to go for a low barrier-to-entry setup. This was:

  • Sass powered CSS
  • BEM
  • Plain ol’ HTML
  • A basic compilation setup

I also focused my attention on samples. CodePen was the natural fit for this because they embed really well. Users can also fork them and play with them themselves.

The last decision was to roll out a pattern library to break up components into little pieces. This helped me in a couple of ways. It helped with organization mainly—but it also helped me build Boilerform in a bitty, sporadic nature as I was working on it in the evenings.

I had my plan and my stack, so got cracking.

Keeping it simple

It’s easy for a project like this to get out of hand, so it’s useful to create some points about what Boilerform will be and also what it won’t be.

What Boilerform will be:

  • It’ll always be a boilerplate to get you off to a good start with your project
  • It’ll provide high-level help with HTML, CSS and JavaScript to make both developers' and users' lives easier
  • It’ll aim to be super lightweight, so it doesn’t become a heavy burden
  • It'll offer configurable options that make it flexible and easy to mould into most web projects

What Boilerform won’t be:

  • It won’t be a silver bullet for your forms—it’ll still need some work
  • It won’t be a framework like Bootstrap or Foundation, because it’ll always be a starting point
  • It won’t be overly opinionated with its CSS and JavaScript
  • It’ll never be aimed at one particular framework or web technology

The Specifics

I know y’all like to dive in to the specifics of how things work, so let me give you a whistle-stop tour!

Namespacing the CSS

The first thing I got sorted was namespacing. I’ve worked on a multitude of different sites and setups and they all share something when it comes to CSS: conflicts. With this in mind, I wrote a @mixin that wrapped all the CSS in a .boilerform namespace.

// Source Sass
.c-button {
  @include namespace() {
    background: gray;
  }
}

// This compiles to:
.boilerform .c-button {
  background: gray;
}
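The mixin definition itself isn’t shown above. A minimal version that produces that output could look like this (the real Boilerform source may differ):

```scss
// Wrap any rules passed in via @content in a `.boilerform` parent,
// so nothing inside the boilerplate leaks into the host page's CSS.
@mixin namespace() {
  .boilerform & {
    @content;
  }
}
```

Inside a selector like `.c-button`, the `&` resolves to that selector, so `.boilerform &` nests it under the namespace.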

The mixin is basic right now, but it gives us flexibility to scale. If we wanted to make the namespacing optional down-the-line, we only have to update this mixin. I love that sort of modularity.

Right now, what it does give us is safety. Nothing leaks out of Boilerform and hopefully, whatever leaks in will be handled by the namespaced resets and rules.

BEM With a Garnish of Prefixes

I love BEM. It’s been core to my CSS and markup for a few years now. One thing I love about BEM is that it helps you build small, encapsulated components. This is perfect for a project like Boilerform.

I could probably target naked elements safely because of the namespacing, but BEM is about more than just putting classes on everything. It gives me and others the freedom to write whatever markup structure we want. It’s also really easy for someone to pick up the code and understand what’s related to what, in both the HTML and CSS.

Another thing I added to this setup was a component prefix. Instead of an .input-field component, we’ve got a .c-input-field component. I hope little things like that will help a new contributor see what’s a component right off the bat.

Horror Inputs Get Some Cool Styling

As mentioned above, select menus are awful to style. So are radio buttons and checkboxes.

A trick I’ve been using for a while now is abstracting the styling to other friendlier HTML elements. For example, with <select> elements, I wrap them in a .c-select-field component and use siblings to add a consistent caret.

For checkboxes and radio buttons, I visually-hide the main input and use adjacent <label> elements to display state change. Using this approach makes working with these controls so much easier. Importantly, we maintain accessibility and native events too.
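A minimal sketch of that checkbox approach (the class names here are illustrative, not necessarily Boilerform’s):

```html
<style>
  /* Visually hide the real input, but keep it focusable and accessible. */
  .c-check-field__input {
    position: absolute;
    opacity: 0;
  }

  /* Draw a custom box on the adjacent label instead. */
  .c-check-field__label::before {
    content: "";
    display: inline-block;
    width: 1em;
    height: 1em;
    margin-right: 0.5em;
    border: 1px solid gray;
    vertical-align: middle;
  }

  /* Reflect the hidden input's state changes on the label. */
  .c-check-field__input:checked + .c-check-field__label::before {
    background: gray;
  }
  .c-check-field__input:focus + .c-check-field__label::before {
    outline: 2px solid blue;
  }
</style>

<span class="c-check-field">
  <input class="c-check-field__input" type="checkbox" id="terms" name="terms">
  <label class="c-check-field__label" for="terms">I agree to the terms</label>
</span>
```

Because the real checkbox is still in the document, clicking the label toggles it, and keyboard and assistive-technology behaviour stays native.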

Base Attributes to Make Fields Easier to Use

I touched on it above with my example about email fields and capitalization, but that wasn’t the only addition of useful attributes.

  • Search fields have autocorrect="off" on them to prevent browsers trying to fix spelling. I strongly recommend that you add this to inputs that a user inserts their name into as well.
  • Number fields have min, max and step attributes set to help with validation. It’s also great for keyboard users.
  • All fields have blank name and id attributes to hopefully speed up the wiring-up process.
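Put together, a couple of fields following those rules might look like this (the attribute values are filled in here for readability; Boilerform itself ships name and id blank for you to wire up):

```html
<label for="quantity">Quantity</label>
<input type="number" id="quantity" name="quantity" min="1" max="10" step="1">

<label for="search">Search</label>
<input type="search" id="search" name="search" autocorrect="off" autocapitalize="none">
```

The min/max/step values are illustrative; the point is that they are present at all, giving browsers and keyboards something to validate against.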

I’m certainly keen for this to be expanded on, because little tweaks like this are great for user experience.

Going Forward. Can You Help?

Boilerform is in a good place right now, but it has real potential to be useful. Some ideas I’ve had for its ongoing development are:

  • Introducing multiple JavaScript library integrations, such as React, Vue, and Angular
  • Create some base form layouts in the pattern library
  • Create Sass mixins for styling pesky stuff like placeholders
  • Improve configurability
  • Add new elements such as the range input
  • Create multilingual documentation

As you can see, that's a lot of work, so it would be awesome if we can get some contributors into the project to make something truly useful for our community. Pulling in contributors with different areas of expertise and backgrounds will help us make it useful for as many people as possible, from end-users to back-end developers.

Let’s make something great together. 🙂

Check out the project site or the GitHub repository.

Boilerform: A Follow-Up is a post from CSS-Tricks

People Writing About Style Guides

Tue, 01/30/2018 - 14:51

Are you thinking about style guides lately? It seems to me it couldn't be a hotter topic these days. I'm delighted to see it, as someone who was trying to think and build this way when the prevailing wisdom was "nice thought, but these never work." I suspect it's threefold why style guides and design systems have taken off:

  1. Component-based front-end architectures becoming very popular
  2. Styling philosophies that scope styles becoming very popular
  3. A shift in community attitude that style guides work

That last one feels akin to cryptocurrency to me. If everyone believes in the value, it works. If people stop believing in the value, it dies.

Anyway, in my typical Coffee-and-RSS mornings, I've come across quite a few recently written articles on style guides, so I figured I'd round them up for your enjoyment.

How to Build a Design System with a Small Team by Naema Baskanderi:

As a small team working on B2B enterprise software, we were diving into creating a design system with limited time, budget and resources ... Where do you start when you don’t have enough resources, time or budget?

Her five tips feel about right to me:

  1. Don’t start from scratch
  2. Know what you’re working with (an audit)
  3. Build as you go
  4. Know your limits
  5. Stay organized

Style guide-driven design systems by Brad Frost:

I’ll often have teams stand up the style guide website on Day 1 of their design system initiative. A style guide serves as the storefront that showcases all of the design system’s ingredients and serves as a tangible center of mass for the whole endeavor.

This Also published their style guide (here are hundreds of others, if you like peeking at other people's take on this kind of thing).

What is notable to me is that it's the closest to the actual meaning of "style guide" (as opposed to a pattern library or design system, which are more about design instructions for building out parts of the website). They only include the three things that are most important to their brand: typography, writing, and identity. Smart.

Everything you write should be easy to understand. Clarity of writing usually follows clarity of thought. Take time to think about what you’re going to say, then say it as simply as possible. Keep these rules in mind whenever you’re writing on behalf of the studio.

Laying the foundations for system design by Andrew Couldwell:

I use the term ‘foundations’ as part of a hierarchy for design systems and thinking. Think of the foundations as digital brand guidelines. They inspire and dovetail into our design systems, guiding all our digital products.

  • At a brand level they cover things like values, identity, tone of voice, photography, illustration, colours and typography.
  • At a digital level they cover things like formatting, localization, calls to action, responsive design and accessibility.
  • And in design systems they are the basis of, and cover the application of, things like text styles, form inputs, buttons and responsive grids.

Again a step back and wider view. Yes, a design system, but one that works alongside brand values.

How to create a living style guide by Adriana De La Cuadra:

Similar to a standard style guide, a living style guide provides a set of standards for the use and creation of styles for an application. In the case of a standard style guide, the purpose is to maintain brand cohesiveness and prevent the misuse of graphics and design elements. In the same way LSGs are used to maintain consistency in an application and to guide their implementation. But what makes a LSG different and more powerful is that much of its information comes right from the source code.

An easy first reaction might be: Of course our style guide is "living", we aren't setting out to build a dead style guide. But I think it's an interesting distinction to make. Style guides can sit in your development process in different places, as I wrote a few years back.

It's all too easy to make a style guide that sits on the sidelines or is "the exhaust" of the process. It's entirely different to place your style guide smack in the middle of a development workflow and not allow any sidestepping.

Lastly, Punit Web rounds up some very recently published style guides, in case you're particularly interested in fresh ones you perhaps haven't seen before.

People Writing About Style Guides is a post from CSS-Tricks