
The Critical Request

Tue, 08/01/2017 - 14:17

Serving a website seems pretty simple: Send some HTML, the browser figures out what resources to load next. Then we wait patiently for the page to be ready.

You may not realize it, but a lot is going on under the hood.

Have you ever wondered how the browser figures out which assets should be requested and in what order?

Today we're going to take a look at how we can use resource priorities to improve the speed of delivery.

Resource priority at work

Most browsers parse HTML using a streaming parser—assets are discovered within the markup before it has been fully delivered. As assets are found, they're added to a network queue along with a predetermined priority.

In Chrome today, there are a number of asset prioritization levels: Very Low, Low, Medium, High and Very High. Peeking into the Chrome DevTools source shows that these are aliased to slightly different labels: Lowest, Low, Medium, High and Highest.

To see how your site is prioritizing requests, you can enable a priority column in the Chrome DevTools network request table.

If you're using Safari Technology Preview, the (new!) Priority column can be shown in exactly the same way.

Show the Priority column by right clicking any of the request table headings.

You'll also find the priority for a given request in the Performance tab.

Resource timings and priorities are shown on hover
How does Chrome prioritize resources?

Each resource type (CSS, JavaScript, fonts, etc.) has its own set of rules that dictate how it'll be prioritized. What follows is a non-exhaustive list of network priority plans:

HTML— Highest priority.

Styles—Highest priority. Stylesheets that are referenced using an @import directive will also be prioritized Highest, but they'll be queued after blocking scripts.
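As a quick illustration (file names hypothetical), an imported stylesheet inherits the Highest priority but is discovered late:

```css
/* Inside styles.css: base.css is also fetched at Highest priority, but it
   can't be discovered until styles.css itself has downloaded, and it queues
   behind any blocking scripts. */
@import url("base.css");
```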

Images are the only assets that can vary priority based on viewport heuristics. All images start with a priority of Low but will be upgraded to Medium priority when they are found to render within the visible viewport. Images outside the viewport (also known as "below the fold") will remain at Low priority.

During the process of researching this article, I discovered (with the help of Paul Irish) that Chrome DevTools are currently misreporting images that have been upgraded to Medium as Low priority. Paul wrote up a bug report, which you can track here.

If you're interested in reading the Chrome source that handles the image priority upgrade, start with UpdateAllImageResourcePriorities and ComputeResourcePriority.

Ajax/XHR/fetch()—High priority.

Scripts follow a complex loading prioritization scheme. (Jake Archibald wrote about this in detail during 2013. If you want to know the science behind it, I suggest you grab a cuppa and dive in). The TL;DR version is:

  • Scripts loaded using <script src="name.js"></script> will be prioritized as High if they appear in the markup before an image.
  • Scripts loaded using <script src="name.js"></script> will be prioritized as Medium if they appear in the markup after an image.
  • Scripts that use async or defer attributes will be prioritized as Low.
  • Scripts using type="module" will be prioritized as Low.
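A sketch of those cases in markup (file names hypothetical; priorities per the list above):

```html
<script src="before-image.js"></script>       <!-- High: appears before any image -->
<img src="hero.jpg" alt="">
<script src="after-image.js"></script>        <!-- Medium: appears after an image -->
<script src="deferred.js" defer></script>     <!-- Low: defer -->
<script src="async.js" async></script>        <!-- Low: async -->
<script type="module" src="app.js"></script>  <!-- Low: module -->
```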

Fonts are a bit of a strange beast; they're hugely important resources (who else loves the "I see it!", "Now it's gone", "Whoa, a new font!" game?), so it makes sense that fonts are downloaded at the Highest priority.

Unfortunately, most @font-face rules are found within an external stylesheet (loaded using something like: <link rel="stylesheet" href="file.css">). This means that web fonts are usually delayed until after the stylesheet has downloaded.

Even if your CSS file references a @font-face font, it will not be requested until it is used in a selector and that selector matches an element on the page. If you've built a single page app that doesn't render any text until its JavaScript runs, you're delaying the fonts even further.
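To illustrate that discovery chain (family and file names hypothetical), the font below is only queued once an element matches a selector that uses it:

```css
@font-face {
  font-family: Calibre;
  src: url("calibre.woff2") format("woff2");
}

/* The request for calibre.woff2 fires only when this selector
   matches an element on the page. */
body {
  font-family: Calibre, Helvetica, Arial, sans-serif;
}
```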

What makes a request critical?

Most websites effectively ask the browser to load everything for the page to be fully rendered; there is no concrete concept of "above the fold".

Back in the day, browsers wouldn't make more than 6 simultaneous requests per domain — people hacked around this by using assets-1.domain.tld, assets-2.domain.tld hosts to increase the number of asynchronous downloads but failed to recognize that there would be a DNS hit and TCP connection for each new domain and asset.

While this approach had some merits, many of us didn't understand the full impacts and certainly didn't have good quality browser developer tools to confirm these experiments.

Thankfully today, we have great tools at our disposal. Using CNN as an example, let's identify assets that are absolutely required for the viewport to be visually ready (also known as useful to someone trying to read it).

The user-critical content is the masthead and leading article.

There are really only 5 things necessary to display this screen (and not all of them need to be loaded before the site is usable):

  • Most importantly, the HTML. If all else fails the user can still read the page.
  • CSS
  • The logo (A PNG background-image placed by CSS. This could probably be an inline SVG).
  • 4(!) web font weights.
  • The leading article image.

These assets (note the lack of JavaScript) are essential to the visuals that make up the main viewport of the page. These assets are the ones that should be loaded first.

Diving into the performance panel in Chrome shows us that around 50 requests are made before the fonts and leading image are requested.

CNN.com becomes fully rendered somewhere around the 9s mark. This was recorded using a 4G connection, with reasonably spotty connectivity.

There's a clear mismatch between the requests that are required for viewing and the requests that are being made.

Controlling resource priorities

Now that we've defined what critical requests are, we can start to prioritize them using a few simple, powerful tweaks.

Preload (<link rel="preload" href="font.woff" as="font" />) instructs the browser to add font.woff to the browser's download queue at a "High" priority.

Note: as="font" is the reason why font.woff would be downloaded as High priority — It's a font, so it follows the priority plan discussed earlier in the "How does Chrome prioritize resources?" section.

Essentially, you're telling the browser: You might not know it yet, but we're going to need this.

This is perfect for those critical requests that we identified earlier. Web fonts can nearly always be categorized as absolutely critical, but there are some fundamental issues with how fonts are discovered and loaded:

  • We wait for the CSS to be loaded, parsed and applied before @font-face rules are discovered.
  • A font is not added to the browser's network queue until the browser matches its @font-face rules to the DOM via selectors.
  • This selector matching occurs during style recalculation. It doesn't necessarily happen immediately after download. It can be delayed if the main thread is busy.

In most cases, fonts are delayed by a number of seconds, just because we're not instructing the browser to download them in a timely fashion.

On a mobile device with a slow CPU, laggy connectivity, and no properly-constructed fallback, this can be an absolute deal breaker.

Preload in action: fonts

I ran two tests against calibreapp.com. On the first run, I'd changed nothing about the site at all. On the second, I added these two tags:

<link rel="preload" as="font" href="" type="font/woff2" crossorigin />
<link rel="preload" as="font" href="" type="font/woff2" crossorigin />

Below, you'll see a visual comparison of the rendering of these two tests. The results are quite staggering:

The page rendered 3.5 seconds faster when the fonts were preloaded.

Bottom: Fonts are preloaded — The site finishes rendering in 5 seconds on an "emerging markets" 3G connection.

<link rel="preload"> also accepts a media="" attribute, which will selectively prioritize resources based on @media query rules:

<link rel="preload" href="article-lead-sm.jpg" as="image" type="image/jpeg" media="only screen and (max-width: 48rem)">

Here, we're able to preload a particular image for small screen devices. Perfect for that "main hero image".

As demonstrated above, with a simple audit and a couple of tags, we've vastly improved the delivery & render phase. Super.

Getting tough on web fonts

69% of sites use web fonts, and unfortunately, they're providing a sub-par experience in most cases. They appear, then disappear, then appear again, change weights and jolt the page around during the render sequence.

Frankly, this sucks on almost every level.

As you've seen above, controlling the request order and priority of fonts has a massive effect on render speed. It's clear that we should be looking to prioritize web font requests in most cases.

We can make further improvements using the CSS font-display property, which allows us to control how fonts display while web fonts are being requested and loaded.

There are 4 options at your disposal, but I'd suggest using font-display: swap;, which will show the fallback font until the web font has loaded—at which point it'll be replaced.

Given a font stack like this:

body { font-family: Calibre, Helvetica, Arial; }

The browser will display Helvetica (or Arial, if you don't have Helvetica) until the Calibre font has loaded. Right now, Chrome and Opera are the only browsers that support font-display, but it's a step forward, and there's no reason not to use it starting today.
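For clarity, font-display is declared inside the @font-face rule itself; a minimal sketch (file name assumed):

```css
@font-face {
  font-family: Calibre;
  src: url("calibre.woff2") format("woff2");
  font-display: swap; /* show the fallback immediately, swap in the web font once loaded */
}
```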

Keeping on top of page performance

As you're well aware, websites are never "complete". There are always improvements to be made and it can feel overwhelming, quickly.

Calibre is an automated tool for auditing performance, accessibility and web best practices; it'll help you stay on top of things.

As you've seen above, there are a few metrics that are key to understanding user performance.

  • First paint tells us when the browser goes from "nothing to something".
  • First meaningful paint tells us when the browser has "rendered something useful".
  • Finally, First Interactive tells us when the page has fully rendered and the JavaScript main thread has settled (low CPU activity for a number of seconds).

Here, we set a budget on CNN’s "First meaningful paint" of less than 5 seconds.
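In the browser, the paint-based metrics above are exposed through the Performance API; a minimal sketch for logging them (the standard entry names are 'first-paint' and 'first-contentful-paint'):

```javascript
// Logs each paint timing entry the browser has recorded so far.
// (In a non-browser environment this list is simply empty.)
var paints = performance.getEntriesByType('paint');
paints.forEach(function (entry) {
  console.log(entry.name + ': ' + entry.startTime.toFixed(1) + 'ms');
});
```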

You can set budgets against all these key user experience metrics. When those budgets are exceeded (or met) your team will be notified by Slack, email or wherever you like.

Calibre displays network request priority, so you can be sure of the requests being made. Tweak priorities and improve performance.

I hope that you've learned some valuable skills to audit requests and their priorities, as well as sparked a few ideas to experiment with to make meaningful performance improvements.

Your critical request checklist:
  • ✅ Enable the Priority column in Chrome DevTools.
  • ✅ Decide which requests must be made before users can see a fully rendered page.
  • ✅ Reduce the number of required critical requests where possible.
  • ✅ Use <link rel="prefetch"> for assets that will probably be used on the next page of the site.
  • ✅ Use Link: <https://example.com/other/styles.css>; rel=preload; as=style; nopush HTTP headers (https://www.w3.org/TR/preload/) to tell the browser which resources to preload before the HTML has been fully delivered.
  • 🚫 HTTP/2 Server push is thorny. Probably avoid it. (See this informative document by Tom Bergan, Simon Pelchat and Michael Buettner, as well as Jake Archibald's "HTTP/2 Push is tougher than I thought")
  • ✅ Use font-display: swap; with web fonts where possible.
  • ⏱ Are web fonts being used? Can they be removed?
    If no: Prioritize them and use WOFF2!
  • ⏱ Is a late loading script delaying your single page app from displaying anything at all?
  • 📹 Check this great free screencast by Front End Center that shows how to load webfonts with the best possible fallback experience.
  • 🔍 View chrome://net-internals/#events and load a page — this logs network-related events.
  • No request is faster than no request. ✌️
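As a sketch, the preload Link header from the checklist would appear in a response roughly like this (URL illustrative):

```http
HTTP/1.1 200 OK
Content-Type: text/html
Link: <https://example.com/other/styles.css>; rel=preload; as=style; nopush
```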

The Critical Request is a post from CSS-Tricks

A Personal Journey to Fix a Grunt File Permissions issue

Mon, 07/31/2017 - 12:34

I was working on a personal project this past week and got a weird error when I tried to compile my Sass files. Unfortunately, I did not screenshot the exact error, but it was something along the lines of this:

Failed to write to location cssmin: style.css EACCES

That EACCES code threw me for a loop but I was able to deduce that it was a file permissions issue, based on the error description saying it was unable to write the file and from some quick StackOverflow searching.

I couldn't find a lot of answers for how to fix this, so I made an attempt to fix it myself. Many of the forum threads I found suggested it could be a permissions issue with Node, so I went through the painful exercise of reinstalling it and only made things worse, because then Terminal could not even find the npm or grunt tasks.

I finally had to reach out to a buddy (thanks, Neal!) to sort things out. I thought I'd chronicle the process he took me through in case it helps other people down the road.

I'll preface all this by saying I am working on a Mac, so I doubt that everything covered here will be universally applicable to all platforms.

Reinstalling Node

Before I talked to Neal, I went ahead and reinstalled Node via the installer package on nodejs.org.

I was told that my permissions issue would likely come up again on future projects until I take the step of re-installing Node yet again using a package manager. Some people use Homebrew, though I've seen forums caution against it for various reasons. Node Version Manager appears to be a solid option because it's designed specifically for managing multiple Node versions on the same machine, which makes it easy for future installations and version switching on projects.

Setting a Path to Node Modules

With Node installed, I expected I could point Terminal to my project:

cd /path/to/my/project/folder

...then install the Node modules I put in the project's package.json file:

npm install

Instead, I got this:

npm: command not found

Crap. Didn't I just reinstall Node?

Turns out Terminal (or more correctly, my system in general) was making an assumption about where Node was installed and needed to be told where to look.

I resolved this by opening the system's .bash_profile file and adding the path there. You can get all geeky and open the file up with Terminal using something like:

touch .bash_profile
open .bash_profile

Or do what I did and simply open it with your code editor. It's an invisible file, but most code editors show those in the folder panel. The file should be located in your user directory, which is something like Macintosh HD > Users > Your User Folder.

This is what I added to ensure Node was being referenced in the correct location:

export PATH="$HOME/.npm-packages/bin:$HOME/.node/bin:$PATH"

That's basically telling the system to go look in the .npm-packages/bin/ directory for Node commands. In fact, if I browse into that folder, I do indeed see my NPM and Grunt commands, both of which Terminal previously could not locate.
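A minimal sketch of the mechanism at work (scratch paths, nothing to do with Node itself): directories listed in PATH are searched in order when the shell resolves a command name.

```shell
# Create a throwaway directory containing one executable...
mkdir -p /tmp/demo-bin
printf '#!/bin/sh\necho hello-from-demo\n' > /tmp/demo-bin/demo-cmd
chmod +x /tmp/demo-bin/demo-cmd

# ...then put it on PATH, exactly like the .bash_profile snippet does.
export PATH="/tmp/demo-bin:$PATH"

# The shell can now find the command by name.
demo-cmd
```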

With that little snippet in hand and saved to .bash_profile, I was stoked when I tried to install my Node modules again:

cd /path/to/my/project/folder
npm install

...and saw everything load up as expected!

Updating Permissions

My commands were up and running again, but I was still getting the same EACCES error that I started with at the beginning of this post.

The issue was that the project folders were originally created as the Root user, probably when I was in Terminal using sudo for something else. As a result, running my Grunt tasks with my user account was not enough to give Grunt permission to write files to those folders.

I changed directory to the folder where Grunt was trying to write my style.css file, then checked the permissions of that folder:

cd /path/of/the/parent/directory
ls -al

That displays the permissions for all of the files and folders in that directory. It showed me that the permissions for the folder were indeed not set for my user account to write files to the folder. I changed directory to that specific folder and used the following command to make myself the owner (replacing username with my username, of course):

sudo chown -R username:staff .

I went to run my Grunt tasks again and, voila, everything ran, compiled and saved as I hoped.
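For reference, the inspection steps above can be sketched against a scratch directory (paths hypothetical; on the real project, the owner column of ls -al would have shown root):

```shell
# Build a stand-in for the project's output directory.
mkdir -p /tmp/demo-parent/dist
touch /tmp/demo-parent/dist/style.css

# First column: permission bits. Third column: the owner.
ls -al /tmp/demo-parent

# Print just the owner of the directory Grunt writes into.
ls -ld /tmp/demo-parent/dist | awk '{print $3}'
```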

In Summary

I know the issues I faced here may either be unique to me or of my own making, but I hope this helps someone else. As a web designer, the command line can be daunting or even a little scary to me, even though I feel I'm expected to be familiar with tools that rely on it.

It's great that we have community-driven sites like StackOverflow when we need them. At the same time, there is often no substitute for reaching out for personal help.

A Personal Journey to Fix a Grunt File Permissions issue is a post from CSS-Tricks

Designing Between Ranges

Mon, 07/31/2017 - 12:33

CSS is written slowly, line by line, and with each passing moment, we limit the space of what’s possible for our design system. Take this example:

body { font-family: 'Proxima-Nova', sans-serif; color: #333; }

With just a couple of lines of CSS we’ve set the defaults for our entire codebase. Most elements (like paragraphs, lists and plain ol’ divs) will inherit these instructions. But what we rarely think about when we write CSS is that from here on out we’ll have to continuously override these rules if we want another typeface or color. And it’s those overrides that cause a lot of issues in terms of maintenance and scalability. And it’s those issues that cost us heartbreak, time and money.

In the example above that’s probably not an issue but I think in general we don’t recognize the true strength and dangers of the cascade and what it means to our design systems; most folks think that the cascade is designed to let us overwrite previous instructions but I would warn against that. Every time we override a style that is inherited we are making our codebase vastly more complex. How many hours have you spent inspecting an element and scrolled through each rule in order to understand why an element looks the way it does or how the long chain of inheritance has messed up your styles completely?

I think this is because we often don't set up proper default styles for our elements. And when we want to change the style of an element, instead of taking the time to question and refactor those default styles, we simply override them – making matters worse.

So I’ve been thinking a lot about how we ought to make a codebase that incentivizes us all to write better code and how to design our CSS so that it doesn’t encourage other people to make hacky classes to override things.

One of my favorite posts on this subject was written earlier this month by Brandon Smith where he describes the ways in which CSS can become so complicated (emphasis mine):

...never be more explicit than you need to be. Web pages are responsive by default. Writing good CSS means leveraging that fact instead of overriding it. Use percentages or viewport units instead of a media query if possible. Use min-width instead of width where you can. Think in terms of rules, in terms of what you really mean to say, instead of just adding properties until things look right. Try to get a feel for how the browser resolves layout and sizing, and make your changes and additions on top of that judiciously. Work with CSS, instead of against it.

One of Brandon’s suggestions is to avoid using the width property altogether and to stick to max-width or min-width instead. Throughout that post he reminds us that whatever we’re building should sit between ranges, between one of two extremes. So here’s an example of a class with a bad default style:

.button { width: 300px; }

Is that button likely to be that width always, everywhere? On mobile? In every media query? In every state? I highly doubt it and in fact, I believe that this class is just the sort that’s begging to be overwritten with yet another class:

.button-thats-just-a-bit-bigger { width: 310px; }

That’s a silly example but I’ve seen code like this in a lot of teams – and it’s not because the person was writing bad code but because the system encouraged them to write bad code. Everyone is on a deadline for shipping a product and most folks don’t have the time or the inclination to dive into a large codebase and refactor those core default styles I mentioned. It’s easier to just keep writing CSS because it solves all of their problems immediately.

Along these lines, Jeremy Keith wrote about this issue just the other day:

Unlike a programming language that requires knowledge of loops, variables, and other concepts, CSS is pretty easy to pick up. Maybe it’s because of this that it has gained the reputation of being simple. It is simple in the sense of “not complex”, but that doesn’t mean it’s easy. Mistaking “simple” for “easy” will only lead to heartache.

I think that’s what’s happened with some programmers coming to CSS for the first time. They’ve heard it’s simple, so they assume it’s easy. But then when they try to use it, it doesn’t work. It must be the fault of the language because they know that they are smart, and this is supposed to be easy. So they blame the language. They say it’s broken. And so they try to “fix” it by making it conform to a more programmatic way of thinking.

I can’t help but think that they would be less frustrated if they would accept that CSS is not easy. Simple, yes, but not easy.

The reason why CSS is simple is because of the cascade, but it’s not as easy as we might first think because of the cascade, too. It’s those default styles that filter down into everything that is the biggest strength and greatest weakness of the language. And I don’t think JavaScript-based solutions will help with that as much as everyone argues to the contrary.

So what’s the solution? I think that by designing in a range, thinking between states and looking at each and every line of CSS as a suggestion instead of an absolute, unbreakable law is the first step forward. What’s the step after that? Container queries. Container queries. Container queries.

Designing Between Ranges is a post from CSS-Tricks

What is Timeless Web Design?

Fri, 07/28/2017 - 13:20

Let's say you took on a client, and they wanted something very specific from you. They wanted a website that, without any changes at all, would still look good in 10 years.

Turns out, when you pose this question to a bunch of web designers and developers, the responses are hugely varied!

The Bring It On Crowd

There are certain folks who see this as an intriguing challenge and would relish the opportunity.


With glee.

— Jeremy Keith (@adactio) July 27, 2017

Accept the challenge 👌

— Dick (@michaeldick) July 27, 2017

Totally deliver 👍

— Kevin Nagurski (@knagurski) July 27, 2017

The "Keep It Simple" Crowd

This is mostly where my own mind went:

focus on minimalist design and great typography.

— Zach Leatherman (@zachleat) July 27, 2017

I would avoid extremes: no extremely vivid colors; a classic font set; clean but not overly minimal; moderate border radiuses

— keith•j•grant (@keithjgrant) July 27, 2017

Focus on the typography.

— Daniel Sellers (@daniel_sellers) July 27, 2017

Keep it clean and minimal with excellent typography.

— ⌘ Sean Bunton (@seanbunton) July 27, 2017

Plus of course some nods to Motherfucking website and Better Motherfucking Website.

The "Nope" Crowd

An awful lot of folks would straight up just say no. Half?


— Cennydd (@Cennydd) July 27, 2017

To be fair, we didn't exactly set the stage with a lot of detail here. I bet some folks imagined these clients as dummies that don't know what they are asking.

Decline the business, because that’s not a reasonable expectation.

— Len Damico (@lendamico) July 27, 2017


— Jez McKean (@jezmck) July 27, 2017

I wonder if the client presented themselves well, clearly knew what they were asking, and were happy to pay for it, if many of these designers would have responded differently.


— Woman Code Warrior (@amberweinberg) July 28, 2017

Still, it's curious that so many designers saw no challenge here, just the absurdity.

The "Let's Get Technical" Crowd

I'm partially in this group! What things can and should we reach for in this project, and what should we avoid?

A) system fonts
B) semantic HTML
C) let the browser's shifting sands on how to render the page update the site for you
D) git drnk

— Sean T. McBeth (@Sean_McBeth) July 28, 2017

mobile-first, svg graphics, time spent on ux analysis (card sort analysis etc), decoupled modular micro-service front+back end

— Adam Mackintosh (@agm1984) July 27, 2017

Honestly an SPA, no external dependents, and build an archive and entry mechanism to handle content. Wildcard redirect and buy 10 yr hosting

— Jeff (@etisdew) July 28, 2017

If you're looking for an actual answer besides no, maybe html only, inline styles, nothing dynamic, no external calls.

— David v2.9.5 (@davidlaietta) July 28, 2017

"No external calls" seems particularly smart here.

Based on experience and observation in my time in the industry, I'd say somewhere around 75% of websites are completely gone after 10 years, let alone particular URLs on those websites remaining exactly the same (as reliable external resources).

The "It's About The Content" Crowd

Simple, driven by content

— Eric Shuff (@ericshuff) July 27, 2017

Focus on making it easy to publish and manage great content. Because great content can easily outlive 10 years.

— Deepti Boddapati (@DeeptiBoddapati) July 27, 2017

Very basic, content focus without all the pretty things.

— Bethany (@fleshskeleton) July 27, 2017

Minimal. Large text elements with a few images mixed in with the content and little color.

— Veronica Domeier (@hellodomeier) July 27, 2017

The "See Existing Examples" Crowd

Adopt the Craigslist design philosophy.

— Chris Cashdollar (@ccashdollar) July 27, 2017

Borrow from @daringfireball

— Joe Casabona (@jcasabona) July 27, 2017

Hand them the Google homepage.

— Darth Seh (@seh) July 27, 2017

https://t.co/wu3bjk5zoA that would work I guess

— Pascal (@murindwaz) July 27, 2017

Plus things like Wikipedia and Space Jam. Also see Brutalist Websites.

Interesting Takes

Our very own Robin Rendle had an interesting take. Due to population growth, growing networks, and mobile device ubiquity, the site may not want to be in English, especially if it has a global audience. Or at least, be in multiple major world languages.

Leave it to Sarah to come in for the side tackle:

shape the culture that uses the app/site to hate change

— Overactive Algorithm (@sarah_edo) July 28, 2017

And Christopher to give us some options to keep them on their toes:

A) say, “Is this one of those tricky Google job interview questions?”
B) say, “GeoCities lasted 10,000 years, so this will be easy.”
C) 👀

— Christopher Schmitt (@teleject) July 28, 2017

Why do any design at all?

text file on the server root

— Eric Bailey (@ericwbailey) July 27, 2017

Although I might argue in that case, you might as well make it an `.html` file instead of `.txt` so you can at least hyperlink things.

My Take

Clearly "it depends" on what this website is supposed to accomplish.

I can tell you what I picture though when I think about it.

I picture a business card style website. I picture black-and-white. I picture a clean and simple logo.

I picture really (really) good typography. Typography is way older than the web and will be around long after. Pick typefaces that have already stood the test of time.

I picture some, but minimal copy. Even words go stale.

I picture the bare minimum call to action. Probably just a displayed email address. I'd bet on email over phones.

Technically, I think you can count on HTML, CSS, and JavaScript. I don't think anything you can do now with those languages will be deprecated in 10 years.

Layout-wise, I'd look at doing as much as you can with viewport units. Screens will absolutely change in size and density in 10 years. Anything you can make SVG, make SVG. That will reduce worry about new screens. Responsive design is an obvious choice.
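A sketch of that viewport-unit thinking (values illustrative): sizes derive from the screen itself rather than from any particular device.

```css
h1 {
  font-size: calc(1rem + 2vw); /* scales with the viewport, with a sane floor */
}

main {
  width: 90vw;     /* track the screen... */
  max-width: 70ch; /* ...but cap the measure for readability */
  margin: 0 auto;
}
```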

Anything that even passably smells like a trend, avoid.

Inputs will also definitely change. We're already starting to assume a touch screen. Presumably, you won't have to do anything overly interactive, but if you do, I wouldn't bet on a keyboard and mouse.

I'd also spend time on the hosting aspects. Register the domain name for the full 10 years. See if you can pre-buy hosting that long. Pick a company you're reasonably sure will last that long. Use a credit card that you're reasonably sure will last that long. Make sure anything that needs to renew renews automatically, like SSL certificates.

More thoughts, as always, welcome in the comments.

What is Timeless Web Design? is a post from CSS-Tricks

Chrome 60

Fri, 07/28/2017 - 12:10

The latest version of Chrome, version 60, is a pretty big deal for us front-end developers. Here are the two most interesting things for me that just landed, via Pete LePage, who outlines all the features in this video:

Direct Link to Article · Permalink

Chrome 60 is a post from CSS-Tricks

Party Parrot

Thu, 07/27/2017 - 21:11

Just a bit of forking fun, kicked off by Tim Van Damme and the inimitable Party Parrot.

See the Pen :party: by Tim Van Damme (@maxvoltar) on CodePen.

See the Pen :party: by Chris Coyier (@chriscoyier) on CodePen.

See the Pen :party: by Steve Gardner (@steveg3003) on CodePen.

See the Pen :party: by Louis Hoebregts (@Mamboleoo) on CodePen.

See the Pen :party: by Martin Pitt (@nexii) on CodePen.

Party Parrot is a post from CSS-Tricks

The Ultimate Uploading Experience in 5 Minutes

Thu, 07/27/2017 - 17:25

Filestack is a web service that completely handles file uploads for your app.

Let's imagine a little web app together. The web app allows people to write reviews for anything they want. They give the review a name, type up their review, upload a photo, and publish it. Saving a name and text to a database is fairly easy, so the trickiest part about this little app is handling those photo uploads. Here's just a few considerations:

  • You'll need to design a UI. What does the area look like that encourages folks to pick a photo and upload it? What happens when they're ready to upload that photo and interact? You'll probably want to design that experience.
  • You'll likely want to support drag and drop. How is that going to work?
  • You'll probably want to show upload progress. That's just good UX.
  • A lot of people keep their files in Dropbox or other cloud services these days, can you upload from there?
  • What about multiple files? Might make sense to upload three or four images for a review!
  • Are you going to restrict sizes? The app should probably just handle that automatically, right?

That's certainly not a comprehensive list, but I think you can see how every bit of that is a bunch of design and development work. Well, hey, that's the job, right? It is, but the job is even more so about being smart with your time and money to make your app a success. Being smart here, in my opinion, is seriously looking at Filestack to give you a fantastic uploading experience, while you spend your time on your product vision, not already-solved problems.

With Filestack, as a developer, implementation is beautifully easy:

var client = filestack.init('yourApiKey');
client.pick(pickerOptions).then(function(result) {
  console.log(JSON.stringify(result.filesUploaded));
});

And you get an incredibly full featured picker like this:

Notice how the upload is so fast you barely notice it.

And that is completely configurable, of course. Their documentation is super nice, so figuring out how to do all that is no problem. Want to limit it to 3 files? Sure. Only images? Yep. Allow only particular upload integrations? That's your choice.
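For instance, those restrictions are just picker options. A quick sketch of what that configuration might look like (the option names below, like maxFiles and accept, are taken from Filestack's docs at the time of writing, so double-check them against the current API):

```javascript
// Hypothetical pickerOptions object; verify option names against
// Filestack's current documentation before relying on them.
var pickerOptions = {
  maxFiles: 3,                                   // allow at most 3 files
  accept: ['image/*'],                           // images only
  fromSources: ['local_file_system', 'dropbox']  // restrict upload integrations
};
// ...then pass it along: client.pick(pickerOptions).then(...)
```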

We started this talking about photos. Filestack is particularly good at handling those for you, in part because it means that you can offer a really great UX (upload whatever!) while also making sure you do all the right developer stuff with those photos. Like if a user uploads a 5 MB photo, no problem, you can allow them to drop it, alter it, and then you can serve a resized and optimized version of it.

Similar stuff exists for video, audio, and other special file types. And don't let me stop you from checking out all the really fancy stuff, like facial recognition, filtering, collages, and all that.

Serve it from where? Well that's your call. If you really need to store your files somewhere specific, you can do that. Easier, you can let Filestack be your document store as well as your CDN for delivering the files.

Another thing to think about with this type of development work is maintenance. Rolling your own system means you're on your own when things change. Browsers evolve. The mobile landscape is ever-changing. APIs change. Third parties come and go and make life difficult. All of that stuff is abstracted away with Filestack.

With all this happening in JavaScript, Filestack works great with literally any way you're building a website. And if you have an iOS app as well, don't worry, they have an SDK for that too!

Direct Link to ArticlePermalink

The Ultimate Uploading Experience in 5 Minutes is a post from CSS-Tricks

The Browser Statistics That Matter

Thu, 07/27/2017 - 13:08

In which I argue that the only browser usage statistics that make sense to use for decision-making are the ones gathered from the website being worked on itself.

The reason you can’t use global statistics as a stand-in for your own is because they could be wildly wrong ... Sites like StatCounter that track the worldwide browser market are fascinating, but I’d argue largely exist as dinner party talk.

Direct Link to ArticlePermalink

The Browser Statistics That Matter is a post from CSS-Tricks

How to be evil (but please don’t!) – the modals & overlays edition

Wed, 07/26/2017 - 12:00

We've all been there. Landed on a website only to be slapped with a modal that looked something like the one below:

Hello darkness, my old friend.

For me that triggers a knee-jerk reaction: curse for giving them a pageview, close the tab, and never return. But there's also that off case when we might actually try to get to the info behind that modal. So the next step in this situation is to bring up DevTools in order to delete the modal and overlay, and maybe some other useless things that clutter up the page while we're at it.

This is where that page starts to dabble in evil.

We may not be able to get to the DevTools via "Inspect Element" because the context menu might be disabled. That one is easy, it only takes them one line of code:

addEventListener('contextmenu', e => e.preventDefault(), false);

But hey, no problem, that's what keyboard shortcuts are for, right? Right... unless there's a bit of JavaScript in the vein of the one below running:

addEventListener('keydown', e => {
  if(e.keyCode === 123 /* F12: Chrome, Edge dev tools */ ||
     (e.shiftKey && e.ctrlKey && (
       e.keyCode === 73 /* + I: Chrome, FF dev tools */ ||
       e.keyCode === 67 /* + C: Chrome, FF inspect el */ ||
       e.keyCode === 74 /* + J: Chrome, FF console */ ||
       e.keyCode === 75 /* + K: FF web console */ ||
       e.keyCode === 83 /* + S: FF debugger */ ||
       e.keyCode === 69 /* + E: FF network */ ||
       e.keyCode === 77 /* + M: FF responsive design mode */)) ||
     (e.shiftKey && (
       e.keyCode === 118 /* + F5: Firefox style editor */ ||
       e.keyCode === 116 /* + F5: Firefox performance */)) ||
     (e.ctrlKey && e.keyCode === 85 /* + U: Chrome, FF view source */)) {
    e.preventDefault();
  }
}, false);

Still, nothing can be done about the browser menus. That will finally bring up DevTools for us! Hooray! Delete those awful nodes and... see how the page refreshes because there's a mutation observer watching out for such actions. Something like the bit of code below:

addEventListener('DOMContentLoaded', e => {
  let observer = new MutationObserver((records) => {
    let reload = false;
    records.forEach((record) => {
      record.removedNodes.forEach((node) => {
        if(node.id === 'overlay' && !validCustomer())
          reload = true;
      });
      if(reload) window.location.reload(true);
    });
  });
  observer.observe(document.documentElement, {
    attributes: true,
    childList: true,
    subtree: true,
    attributeOldValue: true,
    characterData: true
  });
});

"This is war!" mode activated! Alright, just change the modal and overlay styles then! Except, all the styles are inline, with !important on top of that, so there's no way we can override it all without touching that style attribute.

Oh, the !importance of it all...

Some people might remember how 256 classes used to override an id, but this has changed in WebKit browsers in the meantime (it still happens in Edge and Firefox though), and 256 IDs never could override an inline style anyway.

Well, just change the style attribute then. However, another "surprise" awaits: there's also a bit of JavaScript watching out for attribute changes:

if(record.type === 'attributes' &&
   (record.target.matches && record.target.matches('body, #content, #overlay')) &&
   record.attributeName === 'style' && !validCustomer())
  reload = true;

So unless there are some styles that could help us get the overlay out of the way that the person coding this awful thing missed setting inline, we can't get past this by modifying style attributes.

The first thing to look for is the display property as setting that to none on the overlay solves all problems. Then there's the combo of position and z-index. Even if these are set inline on the actual overlay, if they're not set inline on the actual content below the overlay, then we have the option of setting an even higher z-index value rule for the content and bring it above the overlay. However, it's highly unlikely anyone would miss setting these.
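If they did miss them, though, either of these one-off rules would do the trick (a sketch using the #overlay and #content ids from the snippets above):

```css
/* only works if display wasn't already locked inline with !important */
#overlay { display: none; }

/* or lift the content above the overlay in the stacking order */
#content { position: relative; z-index: 2147483647; }
```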

If offsets (top, right, bottom, left) aren't set inline, they could help us shift the overlay off the page and the same goes for margin or translate properties. In a similar fashion, even if they're set inline on the overlay, but not on the actual content and on the parent of both the overlay and the content, then we can shift this parent laterally by something like 100vw and then shift just the content back into view.

At the end of the day, even a combination of opacity and pointer-events could work.

However, if the person coding the annoying overlay didn't miss setting any of these inline and we have that bit of JS... we cannot mess with the inline styles without making the page reload.

But wait! There's something that can override inline values with !important, something that's likely to be missed by many... and that's @keyframe animations! Adding the following bit of CSS in a new or already existing style element brings the overlay and modal behind the actual content:

#overlay { animation: a 1s infinite }

@keyframes a { 0%, to { z-index: -1 } }

However, there might be a bit of JavaScript that prevents us from adding new style or link elements (to reference an external stylesheet) or modifying already existing ones to include the above bit of CSS, as it can be seen here.

if(record.type === 'characterData' &&
   record.target.parentNode.tagName.toLowerCase() === 'style')
  reload = true;

record.addedNodes.forEach((node) => {
  if(node.matches && node.matches('style:not([data-href]), link[rel="stylesheet"]'))
    reload = true;
});

See the Pen How to be evil (but please don't) by Ana Tudor (@thebabydino) on CodePen.

But if there already is an external stylesheet, we could add our bit of CSS there and, save for checking for changes in a requestAnimationFrame loop, there is no way of detecting such a change (there have been talks about a style mutation observer, but currently, we don't yet have anything of the kind).

Of course, the animation property could also be set inline to none, which would prevent us from using this workaround.

But in that case, we could still disable JavaScript and remove the overlay and modal. Or just view the page source via the browser menu. The bottom line is: if the content is already there, behind the overlay, there is absolutely no sure way of preventing access to it. But trying to anyway is a sure way to get people to curse you.

I wrote this article because I actually experienced something similar recently. I eventually got to the content. It just took more time and it made me hate whoever coded that thing.

It could be argued that these overlays are pretty efficient when it comes to stopping non-technical people, who make up the vast majority of a website's visitors. But when they go to the trouble of trying to prevent bringing up DevTools, node deletions, style changes and the like, and pack the whole JavaScript that does this into hex on top of it all, they're not trying to stop just non-technical people.

How to be evil (but please don’t!) – the modals & overlays edition is a post from CSS-Tricks

One Illustration, Three SVG outputs

Tue, 07/25/2017 - 13:16

Let's say we draw the same vector graphic in different design applications and export each one as SVG for use on the web. Should we expect the same SVG file from each application?

On the one hand, we might expect each file to be the same because of our past history with exporting images. Applications have typically been consistent at saving JPGs, PNGs, and GIFs with perhaps minor differences to the overall file size.

On the other hand, SVG is different than our typical image file and our expectations might need to adapt to those differences. The output of SVG might be visual, but what we're really talking about is math and code. That means an SVG file is a little more comparable to a CSS file that has been compiled from a preprocessor. Your results may vary based on the method used to compile the code.

I decided to draw the same illustration in Illustrator, Sketch, and Figma and compare exported SVG code. This post will summarize those results.

About the Illustration

It's nothing fancy. A basic plus sign with a white fill on top of a circle with a black fill. The idea was to use something that's close to an icon that would be common on any site.

The illustration was drawn three times, once in each application; making the icon in one application then copying and pasting it into the others didn't seem like a fair comparison, and drawing it natively allowed each application to interpret the SVG with its own tools. I'm not the most skilled illustrator, so I made the illustration once, then traced it in the other applications to ensure everything was to scale and that all the points were nearly identical.

About the Comparison

It's worth noting that this post is not at all concerned about the "best" export of the group. What's more interesting is (1) whether there are differences in how SVG code is compiled and (2) how those possible differences might affect a front-end workflow or even influence which application is better suited for a particular project.

Here's what is being compared:

  • File Size
  • Code Structure
  • Naming Standards

One more thing worth mentioning is that we're assuming default export options in this comparison. Illustrator has a robust set of options that can completely change how an SVG file is saved, whereas the others do not. I decided to use the default Illustrator export settings, with the minor exception of not minifying the code on export.

All good? Let's look at the results.

Side by Side Code Comparison

See the Pen One Illustration, Three SVG Outputs by Geoff Graham (@geoffgraham) on CodePen.

Table Comparison

| Comparison | Illustrator | Sketch | Figma |
| --- | --- | --- | --- |
| Export Settings | Yes | No | No |
| File Size | 701 Bytes | 946 Bytes | 1 KB |
| XML Header | No | Yes | No |
| Includes height and width attributes | No | Yes | Yes |
| Includes viewBox attribute | Yes | Yes | Yes |
| SVG ID | Yes | No | No |
| SVG ID Name | Generated | NA | NA |
| SVG Data Name Attribute | Layer Name | NA | NA |
| Title Tag (<title>) | File Name | Artboard Name | Layer Name |
| Description Tag (<description>) | NA | Created with Sketch | Created with Figma |
| Includes Definitions (<defs>) | No | Yes | Yes |
| Includes Groups (<g>) | Yes | Yes | Yes |
| Group ID Name | NA | Organized by Page Name, Artboard, Group, then Layer | Organized by Frame, Group, then Layer |
| Includes Use (<use>) | No | No | Yes |

Comparison Summary

Those are some pretty interesting results. Like I mentioned earlier, the goal here is not to declare a winner of who does things best™ but to gauge whether there are differences — and there certainly are differences!

File Size

One huge benefit of SVG, in general, is its small file size versus raster images. That benefit shines in all three cases. For example, the same icon exported as a PNG in Sketch came out to 12 KB, so Sketch's SVG output is a roughly 92% savings over its PNG counterpart.

I'm not particularly sure that the differences in file sizes between the three results here are all that crucial, despite the fact that Illustrator's output is ~30% smaller than Figma's. I only say that because the SVG file that gets used in production will likely be minified and cached in a fashion that makes the difference negligible.

That said, the fact that there is a file size difference at all might influence which build tools you use for your SVG workflow and how the settings for that build tool are configured.

Code Structure

The difference in file size really comes down to how each application structures the code it compiles. For example, where Figma is super into grouping and defining shapes and paths for the sake of making them more reusable in different contexts, Illustrator avoids them altogether and tends to make the file easier to drop inline.

Again, the goal is not to determine whether one approach is better than the other, but recognize that there are different philosophies going into the generated file and let that help determine the right tool for the current job. You might get a smaller file size in one instance, but perhaps more flexibility in another, depending on your needs and priorities.

Naming Standards

Another case in point is how Illustrator uses unique generated IDs on the <svg> element by default. That makes dropping the file inline much less likely to conflict with other inline files where a designer may have used the same file, artboard or layer names across multiple files. By contrast, neither Sketch nor Figma uses an ID directly on the SVG element.

There are build tools that will help craft ID and class names but, if you are tasked with editing an SVG file manually or have to use a file as it's provided to you for some reason, then knowing how an application names things might influence how you approach your work.

Wrapping Up

The biggest takeaway for me from this comparison is a reminder that SVG is code at the end of the day. The applications we use to illustrate vector graphics are simply GUIs for creating code, and the way that code gets written is likely to be different based on who is writing it.

It's really no different than something like the CodePen Rodeo (when is the next one, by the way?) where a single design is provided and many people go off to code it in their own ways. There is no "right" way to code it, but it's fun to see how different people take different paths to achieve the same deliverable.

The bottom line underscored by this comparison is that we can't take the assets we have for granted. As much as we may enjoy the fact that machines are willing to make decisions on our behalf, a front-end workflow is still overwhelmingly a subjective task and influences how we do our jobs.

One Illustration, Three SVG outputs is a post from CSS-Tricks

Simple Server Side Rendering, Routing, and Page Transitions with Nuxt.js

Mon, 07/24/2017 - 14:32

A bit of a wordy title, huh? What is server side rendering? What does it have to do with routing and page transitions? What the heck is Nuxt.js? Funnily enough, even though it sounds complex, working with Nuxt.js and exploring its benefits isn't too difficult. Let's get started!

Server side rendering

You might have heard people talking about server side rendering as of late. We looked at one method of doing that with React recently. One particularly compelling aspect is the performance benefit: when we render our HTML, CSS, and JavaScript on the server, we often have less JavaScript to parse both initially and on subsequent updates. This article does a really good job of going into more depth on the subject. My favorite takeaway is:

By rendering on the server, you can cache the final shape of your data.

Instead of grabbing JSON or other information from the server, parsing it, then using JavaScript to create layouts of that information, we're doing a lot of those calculations upfront, and only sending down the actual HTML, CSS, and JavaScript that we need. This can reap a lot of benefits for caching and SEO, and speed up our apps and sites.

What is Nuxt.js?

Server side rendering sounds pretty nice, but you're probably wondering if it's difficult to set up. I've been using Nuxt.js for my Vue applications lately and found it surprisingly simple to work with. To be clear: you don't need to use Nuxt.js in particular to do server side rendering. I'm just a fan of this tool for many reasons. I ran some tests last month and found that Nuxt.js had even higher lighthouse scores out of the gate than Vue's PWA template, which I thought was impressive.

Nuxt.js is a higher-level framework with a CLI that you can use to create universal Vue applications. Here are some, not all, of the benefits:

  • Server-Side Rendering
  • Automatic Code Splitting
  • Powerful Routing System
  • Great lighthouse scores out of the gate 🐎
  • Static File Serving
  • ES6/ES7 Transpilation
  • Hot reloading in Development
  • Pre-processors: SASS, LESS, Stylus, etc
  • Write Vue Files to create your pages and layouts!
  • My personal favorite: easily add transitions to your pages

Let's set up a basic application with some routing to see the benefits for ourselves.

Getting Set up

The first thing you need to do, if you haven't already, is download Vue's CLI. You can do so globally with this command:

npm install -g vue-cli
# ... or ...
yarn global add vue-cli

You will only need to do this once, not every time you use it.

Next, we'll use the CLI to scaffold a new project, but we'll use Nuxt.js as the template:

vue init nuxt/starter my-project
cd my-project
yarn # or... npm install
npm run dev

You'll see the progress of the app being built, and it will give you a dedicated development server to check out. This is what you'll see right away (with a pretty cool little animation):

Let's take a look at what's creating this initial view of our application. We can go to the `pages` directory, and inside see that we have an `index.vue` page. If we open that up, we'll see all of the markup that it took to create that page. We'll also see that it's a `.vue` file, using single file components just like any ordinary Vue file, with a template tag for the HTML, a script tag for our scripts (where we're importing a component), and some styles in a style tag. (If you aren't familiar with these, there's more info on what they are here.) The coolest part of this whole thing is that this `.vue` file doesn't require any special setup. It's placed in the `pages` directory, and Nuxt.js will automatically make it a server-side rendered page!

Let's create a new page and set up some routing between them. In `pages/index.vue`, dump the content that's already there, and replace it with:

<template>
  <div class="container">
    <h1>Welcome!</h1>
    <p><nuxt-link to="/product">Product page</nuxt-link></p>
  </div>
</template>

<style>
.container {
  font-family: "Quicksand", "Source Sans Pro", -apple-system, BlinkMacSystemFont,
    "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif; /* 1 */
  padding: 60px;
}
</style>

Then let's create another page in the pages directory, we'll call it `product.vue` and put this content inside of it:

<template>
  <div class="container">
    <h1>This is the product page</h1>
    <p><nuxt-link to="/">Home page</nuxt-link></p>
  </div>
</template>
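With those two files in place, Nuxt's file-based routing maps each `.vue` file in `pages` to a route, roughly like this:

```
pages/
├── index.vue    →  /
└── product.vue  →  /product
```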

Right away, you'll see this:

Ta-da! 🏆
Right away, we have server side rendering, routing between pages (if you check out the URL you can see it's going between the index page and product page), and we even have a sweet little green loader that zips across the top. We didn't have to do much at all to get that going.

You might have noticed in here, there's a special little element: <nuxt-link to="/">. This tag can be used like an a tag, where it wraps around a bit of content, and it will set up an internal routing link between our pages. We'll use to="/page-title-here" instead of an href.
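In other words, the two look almost identical side by side (a sketch):

```html
<!-- a regular anchor triggers a full page load -->
<a href="/product">Product page</a>

<!-- nuxt-link routes internally, no full reload -->
<nuxt-link to="/product">Product page</nuxt-link>
```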

Now, let's add some transitions. We’ll do this in a few stages: simple to complex.

Creating Page Transitions

We already have a really cool progress bar that runs across the top of the screen as we’re routing and makes the whole thing feel very zippy. (That’s a technical term). While I like it very much, it won’t really fit the direction we’re headed in, so let’s get rid of it for now.

We're going to go into our `nuxt.config.js` file and change the lines:

/*
** Customize the progress-bar color
*/
loading: { color: '#3B8070' },

to:

loading: false,

You'll also notice a few other things in this nuxt.config.js file. You'll see our meta and head tags as well as the content that will be rendered inside of them. That's because we won't have a traditional `index.html` file as we do in a normal CLI build; Nuxt.js is going to parse and build our `index.vue` file together with these tags and then render the content for us, on the server. If you need to add CSS files, fonts, or the like, we use this Nuxt.js config file to do so.

Now that we have all that down, let's understand what's available to us to create page transitions. In order to understand what's happening on the page that we're plugging into, we need to review how the transition component in Vue works. I've written an article all about this here, so if you'd like deeper knowledge on the subject, you can check that out. But what you really need to know is this: under the hood, Nuxt.js plugs into the functionality of Vue's transition component and gives us some defaults and hooks to work with:

You can see here that we have a hook for what we want to happen right before the animation starts (enter), during the animation/transition (enter-active), and when it finishes. We have these same hooks for when something is leaving, prefixed with leave instead. We can make simple transitions that just interpolate between states, or we can plug a full CSS or JavaScript animation into them.

Usually in a Vue application, we would wrap a component or element in <transition> in order to use this slick little functionality, but Nuxt.js provides this for us at the get-go. Thankfully, our hooks for pages begin with page. All we have to do to create an animation between pages is add a bit of CSS that plugs into those hooks:

.page-enter-active, .page-leave-active {
  transition: all .25s ease-out;
}

.page-enter, .page-leave-active {
  opacity: 0;
  transform-origin: 50% 50%;
}

I'm also going to add an extra bit of styling here so that you can see the page transitions a little easier:

html, body {
  font-family: "Quicksand", "Source Sans Pro", -apple-system, BlinkMacSystemFont,
    "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif; /* 1 */
  background: #222;
  color: white;
  width: 100vw;
  height: 100vh;
}

a, a:visited {
  color: #3edada;
  text-decoration: none;
}

.container {
  padding: 60px;
  width: 100vw;
  height: 100vh;
  background: #444;
}

Right now we're using a CSS transition. This only gives us the ability to designate what to do in the middle of two states. We could do something a little more interesting by having an animation suggest where something is coming from and going to. For that to happen, we could separate out the transitions for the page-enter and page-leave-active classes, but it's a little more DRY to use CSS animations that specify where things are coming from and going to, and plug one into .page-enter-active and the other into .page-leave-active:

.page-enter-active {
  animation: acrossIn .45s ease-out both;
}

.page-leave-active {
  animation: acrossOut .65s ease-in both;
}

@keyframes acrossIn {
  0% { transform: translate3d(-100%, 0, 0); }
  100% { transform: translate3d(0, 0, 0); }
}

@keyframes acrossOut {
  0% { transform: translate3d(0, 0, 0); }
  100% { transform: translate3d(100%, 0, 0); }
}

Let's also add a little bit of special styling to the product page so we can see the difference between these two pages:

<style scoped>
.container {
  background: #222;
}
</style>

This scoped tag is pretty cool because it will apply the styles for this page/vue file only. If you have heard of CSS Modules, you'll be familiar with this concept.

We would see this (this page is for demo purposes only, that's probably too much movement for a typical page transition):

Now, let's say we have a page with a totally different interaction. For this page, the movement up and down was too much, we just want a simple fade. For this case, we'd need to rename our transition hook to separate it out.

Let’s create another page, we’ll call it the contact page and create it in the pages directory.

<template>
  <div class="container">
    <h1>This is the contact page</h1>
    <p><nuxt-link to="/">Home page</nuxt-link></p>
  </div>
</template>

<script>
export default {
  transition: 'fadeOpacity'
}
</script>

<style>
.fadeOpacity-enter-active, .fadeOpacity-leave-active {
  transition: opacity .35s ease-out;
}

.fadeOpacity-enter, .fadeOpacity-leave-active {
  opacity: 0;
}
</style>

Now we can have two-page transitions:

You can see how we could build on these further and create more and more streamlined CSS animations per page. But from here let's dive into my favorite, JavaScript animations, and create page transitions with a bit more horsepower.

JavaScript Hooks

Vue's <transition> component offers some hooks to use JavaScript animation in place of CSS as well. They are as follows, and each hook is optional. The :css="false" binding lets Vue know we're going to use JS for this animation:

<transition
  @before-enter="beforeEnter"
  @enter="enter"
  @after-enter="afterEnter"
  @enter-cancelled="enterCancelled"
  @before-leave="beforeLeave"
  @leave="leave"
  @after-leave="afterLeave"
  @leave-cancelled="leaveCancelled"
  :css="false">
</transition>

The other thing we have available to us are transition modes. I'm a big fan of these, as you can state that one animation will wait for the other animation to finish transitioning out before transitioning in. The transition mode we will work with will be called out-in.
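In plain Vue, that mode is just an attribute on the transition component; a minimal sketch:

```html
<!-- with out-in, the current element transitions out fully
     before the new element transitions in -->
<transition name="fade" mode="out-in">
  <component :is="view"></component>
</transition>
```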

We can do something really wild with JavaScript and the transition mode, again, we're going a little nuts here for the purposes of demo, we would usually do something much more subtle:

In order to do something like this, I've run yarn add gsap because I'm using GreenSock for this animation. In my `index.vue` page, I can remove the existing CSS animation and add this into the <script> tags:

import { TweenMax, Back } from 'gsap'

export default {
  transition: {
    mode: 'out-in',
    css: false,
    beforeEnter (el) {
      TweenMax.set(el, {
        transformPerspective: 600,
        perspective: 300,
        transformStyle: 'preserve-3d'
      })
    },
    enter (el, done) {
      TweenMax.to(el, 1, {
        rotationY: 360,
        transformOrigin: '50% 50%',
        ease: Back.easeOut,
        onComplete: done // signal Vue once the tween finishes
      })
    },
    leave (el, done) {
      TweenMax.to(el, 1, {
        rotationY: 0,
        transformOrigin: '50% 50%',
        ease: Back.easeIn,
        onComplete: done
      })
    }
  }
}

All of the code for these demos exist in my Intro to Vue repo for starter materials if you're getting ramped up learning Vue.

One thing I want to call out here is that currently there is a bug for transition modes in Nuxt.js. This bug is fixed, but the release hasn't come out yet. It should be all fixed and up to date in the upcoming 1.0 release, but in the meantime, here is a working simple sample demo, and the issue to track.

With this working code and those JavaScript hooks we can start to get much fancier and create unique effects, with different transitions on every page:

Here's the site that the demo was deployed to if you'd like to see it live: https://nuxt-type.now.sh/ as well as the repo that houses the code for it: https://github.com/sdras/nuxt-type


In that last demo, you might have noticed we had a common navigation across all of the pages we routed to. In order to create this, we can go into the `layouts` directory, and we'll see a file called `default.vue`. This directory will house the base layouts for all of our pages, "default" being the, uhm, default :)

Right away you'll see this:

<template>
  <div>
    <nuxt/>
  </div>
</template>

That special <nuxt/> tag will be where our `.vue` pages files will be inserted, so in order to create a navigation, we could insert a navigation component like this:

<template>
  <div>
    <img class="moon" src="~assets/FullMoon2010.png" />
    <Navigation />
    <nuxt/>
  </div>
</template>

<script>
import Navigation from '~components/Navigation.vue'

export default {
  components: {
    Navigation
  }
}
</script>

I love this because everything is kept nice and organized between our global and local needs.

I then have a component called Navigation in a directory I've called `components` (this is pretty standard fare for a Vue app). In this file, you'll see a bunch of links to the different pages:

<nav>
  <div class="title">
    <nuxt-link to="/rufina">Rufina</nuxt-link>
    <nuxt-link to="/prata">Prata</nuxt-link>
    <nuxt-link exact to="/">Playfair</nuxt-link>
  </div>
</nav>

You'll notice I'm using that <nuxt-link> tag again even though it's in another directory, and the routing will still work. But that last page has one extra attribute, the exact attribute: <nuxt-link exact to="/">Playfair</nuxt-link>. This is because many routes match the `/` path (all of them do, in fact), so if we specify exact, Nuxt will know that we only mean the index page in particular.
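A toy function (my own illustration, not Nuxt's actual matching code) makes the difference clear:

```javascript
// Toy illustration only: active-link matching is prefix-based by default,
// so the "/" link would be considered active on every route.
function isActiveLink(linkPath, currentPath, exact) {
  if (exact) return currentPath === linkPath;
  return currentPath.startsWith(linkPath);
}

console.log(isActiveLink('/', '/rufina', false)); // true: "/" is a prefix of every path
console.log(isActiveLink('/', '/rufina', true));  // false: only "/" itself matches
```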

Further Resources

If you'd like more information about Nuxt, their documentation is pretty sweet and has a lot of examples to get you going. If you'd like to learn more about Vue, I've just made a course on Frontend Masters and all of the materials are open source here, or you can check out our Guide to Vue, or you can go to the docs which are extremely well-written. Happy coding!

Simple Server Side Rendering, Routing, and Page Transitions with Nuxt.js is a post from CSS-Tricks

A Collection of Interesting Facts about CSS Grid Layout

Fri, 07/21/2017 - 13:24

A few weeks ago I held a CSS Grid Layout workshop. Since I'm, like most of us, also pretty new to the topic, I learned a lot while preparing the slides and demos.
I decided to share with you some of the stuff that was particularly interesting to me.

Have fun!

Negative values lower than -1 may be used for grid-row-end and grid-column-end

In a lot of code examples and tutorials you will see that you can use grid-column-start: 1 and grid-column-end: -1 (or the shorthand grid-column: 1 / -1) to span an element from the first line to the last. My friend Max made me aware that it's possible to use lower values than -1 as well.

.grid-item { grid-column: 1 / -2; }

For example, you can set grid-column: 1 / -2 to span the cells between the first and the second to last line.

See the Pen Grid item from first to second to last by Manuel Matuzovic (@matuzo) on CodePen.

It's possible to use negative values in grid-column/row-start

Another interesting thing about negative values is that you can use them on grid-column/row-start as well. The difference between positive and negative values is that with negative values the placement will come from the opposite side. If you set grid-column-start: -2 the item will be placed on the second to last line.

.item { grid-column-start: -3; grid-row: -2; }

See the Pen Negative values in grid-column/row-start by Manuel Matuzovic (@matuzo) on CodePen.
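One way to think about negative line numbers is counting back from the last explicit grid line. Here's a tiny JavaScript sketch of that resolution (the function name is ours, just for illustration):

```javascript
// Hypothetical helper: resolve a grid line index to its positive
// equivalent, given the number of explicit grid lines.
// In a 4-column grid there are 5 lines, so -1 is line 5 and -2 is line 4.
function resolveGridLine(index, lineCount) {
  if (index > 0) return index;
  if (index < 0) return lineCount + index + 1;
  throw new Error('Grid line 0 is invalid');
}

console.log(resolveGridLine(-1, 5)); // 5 (the last line)
console.log(resolveGridLine(-2, 5)); // 4 (the second to last line)
```

So `grid-column: 1 / -2` on a 4-column grid resolves to `grid-column: 1 / 4`.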

Generated content pseudo-elements (::before and ::after) are treated as grid items

It may seem obvious that pseudo-elements generated with CSS become grid items if they're within a grid container, but I wasn't sure about that. So I created a quick demo to verify it. In the following Pen you can see that generated elements become grid- and flex-items if they are within a corresponding container.

See the Pen Experiment: Pseudo elements as grid items by Manuel Matuzovic (@matuzo) on CodePen.

Animating CSS Grid Layout properties

According to the CSS Grid Layout Module Level 1 specification there are 5 animatable grid properties:

  • grid-gap, grid-row-gap, grid-column-gap
  • grid-template-columns, grid-template-rows

Currently only the animation of grid-gap, grid-row-gap, grid-column-gap is implemented and only in Firefox and Firefox Mobile. I wrote a post about animating CSS Grid Layout properties, where you'll find some more details and a demo.

The value of grid-column/row-end can be lower than the start value

In level 4 of the CSS Grid Garden game I learned that the value of grid-column-end or grid-row-end may be lower than the respective start equivalent.

.item:first-child { grid-column-end: 2; grid-column-start: 4; }

The item in the above code will start on the 4th line and end on the 2nd, or in other words, start on the 2nd line and end on the 4th.

See the Pen Lower grid-column-end value than grid-column-start by Manuel Matuzovic (@matuzo) on CodePen.

Using the `span` keyword on grid-column/row-start and grid-column/row-end

A grid item by default spans a single cell. If you want to change that, the span keyword can be quite convenient. For example setting grid-column-start: 1 and grid-column-end: span 2 will make the grid item span two cells, from the first to the third line.

You can also use the span keyword with grid-column-start. If you set grid-column-end: -1 and grid-column-start: span 2 the grid-item will be placed at the last line and span 2 cells, from the last to third to last line.

See the Pen CSS Grid Layout: span keyword by Manuel Matuzovic (@matuzo) on CodePen.
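The two span placements described above look like this as CSS (class names are ours):

```css
/* Spans two tracks forward, from line 1 to line 3. */
.span-from-start {
  grid-column-start: 1;
  grid-column-end: span 2;
}

/* Ends at the last line and spans two tracks backward,
   occupying the last two cells of the row. */
.span-from-end {
  grid-column-end: -1;
  grid-column-start: span 2;
}
```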

grid-template-areas and implicit named lines

If you create template areas in a grid, you automatically get four implicit named lines, two naming the row-start and row-end and two for the column-start and column-end. By adding the -start or -end suffix to the name of the area, they're applicable like any other named line.

.grid {
  display: grid;
  grid-template-columns: 1fr 200px 200px;
  grid-template-areas:
    "header header header"
    "articles ads posts";
}

.footer {
  grid-column-start: ads-start;
  grid-column-end: posts-end;
}

See an example for implicit named lines in this Pen.

Grid is available in the insider version of Microsoft Edge

Support for CSS Grid Layout is pretty great since all major browsers, except IE and Edge, support it. For a lot of projects you can start using CSS Grid Layouts today. Support for Microsoft Edge will probably come pretty soon, because it's already available in the insider version of Microsoft Edge.

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 57, Opera 44, Firefox 52, IE 11*, Edge 16, Safari 10.1

Mobile / Tablet: iOS Safari 10.3, Opera Mobile No, Opera Mini No, Android 56, Android Chrome 59, Android Firefox 54

If you want to learn more about Grids check out The Complete Guide to Grid, Getting Started with CSS Grid, Grid By Example and my Collection of Grid demos on CodePen.

A Collection of Interesting Facts about CSS Grid Layout is a post from CSS-Tricks

​Edit your website, from your website

Thu, 07/20/2017 - 13:44

Stuck making "a few easy changes" to the website for someone? Component IO makes it quick and simple for you or your team to make edits (even for non-technical users).

You can manage content with a WYSIWYG editor or instantly update HTML, CSS, and JavaScript right from your website. Make changes faster, empower your team, and avoid redeployment bugs. Works with every web technology, from WordPress to Rails to React.

Join hundreds of projects already using Component IO, with a free tier and plans from $7.95/mo. It's built to make web development easier for everyone.

Try it free

Direct Link to ArticlePermalink

​Edit your website, from your website is a post from CSS-Tricks

Playing with Shadow DOM

Thu, 07/20/2017 - 13:44

About a year ago, Twitter announced it would start displaying embedded tweets with the shadow DOM rather than an <iframe>, if the browser supports shadow DOM.

Why? Well, speed is one reason.

They say:

Much lower memory utilization in the browser, and much faster render times. Tweets will appear faster and pages will scroll more smoothly, even when displaying multiple Tweets on the same page.

Why the choice? Why is it necessary to use either iframes or shadow DOM? Why not just inject the content onto the page?

It's a totally understandable need for control. An embedded Tweet should look and behave just exactly like an embedded Tweet. They don't want to worry about the styles of the page bleeding in and messing that up.

An <iframe> makes style scoping very easy. Point the src of the iframe at a URL that displays what you want an embedded tweet to look like, and you're good. The only styles used will be those you include in that document.

Twitter does this iframe-injection in a progressive enhancement and syndication-friendly way. They provide a <blockquote> with the Tweet and a <script>. The script does the iframe-injection. If the script doesn't run, no matter, a happy blockquote. If the script does run, a fully functional embedded Tweet.
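The pattern looks roughly like this (illustrative markup with placeholder content, not Twitter's exact embed code):

```html
<!-- The blockquote is the readable, syndication-friendly fallback. -->
<blockquote class="twitter-tweet">
  <p>The Tweet's text lives here as plain HTML.</p>
</blockquote>
<!-- The script upgrades the blockquote into a fully functional embed. -->
<script async src="https://platform.twitter.com/widgets.js"></script>
```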

That script is the key here. Scripts can do just about anything, and they host it, so they can change it up anytime. That's what they use to detect shadow DOM support and go that route instead. And as we covered, shadow DOM is faster to render and has lower memory needs. Shadow DOM can also help with the style scoping thing, which we'll look at in a moment.

Height flexibility

There's another thing too, that happens to be near and dear to my heart. An <iframe> doesn't adjust in height to fit its contents like you expect other elements to do. You set a height, and that's that. It will have scrollbars, if you allow it and the content needs it. Back in the Wufoo days, we had to jump through quite a few hoops to get embedded forms (in frames) to be as tall as they needed to be. Today, at CodePen, our Embedded Pens have adjustable heights, but there isn't any option for just "as tall as they need to be". (I'm not exactly sure if that makes sense for CodePen Embeds or not, but anyway, you can't do it right now.)

An element with a shadow DOM is just like any other element in that it will expand to fit its content naturally. I'm sure that is appealing to Twitter as well. If they calculate the height wrong, they run the risk of cutting off content or at least having the embedded Tweet look busted.

Most Basic Usage

Here's the bare minimum of how you establish a shadow DOM and put stuff in it:

See the Pen Most basic shadow DOM by Chris Coyier (@chriscoyier) on CodePen.
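If you can't view the Pen, the gist of it is something like this (element names and styles are ours, not the Pen's exact code):

```html
<p>Outside the shadow DOM</p>
<div class="my-element"></div>

<script>
  // Attach a shadow root and give it its own markup and styles.
  // The red color applies only inside the shadow boundary.
  const host = document.querySelector('.my-element');
  const shadow = host.attachShadow({ mode: 'open' });
  shadow.innerHTML = `
    <style>p { color: red; }</style>
    <p>Inside the shadow DOM</p>
  `;
</script>
```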

Notice how the styling within the Shadow DOM doesn't leak out to the regular paragraph element? Both paragraphs would be red if they did.

And notice how the paragraph inside the shadow DOM isn't sans-serif like the one outside? Well, normally that would be. Inherited styles still inherit through the shadow DOM (so in that way it's not quite as strong a barrier as an iframe). But, we're forcing it back to the initial state on purpose like this:

:host {
  all: initial;
}

Handling that fallback

My first thought for dealing with a fallback for browsers that don't support shadow DOM was that you could chuck the same exact content you were stuffing into the shadow DOM into an iframe with srcdoc, like...

<iframe srcdoc="the same content">

Or more likely this is something you're doing in JavaScript, so you'd test for support first, then either do the shadow DOM stuff or dynamically create the iframe:

See the Pen Shadow DOM Basic by Chris Coyier (@chriscoyier) on CodePen.

Turns out srcdoc isn't the best choice (alone) for a fallback as there is no IE or Edge support for it. But also that it's not too big of a deal to just use a data URL for the regular src. Here's a fork by Šime Vidas where he fixes that up:

let content = `
  <style>
    body { /* for fallback iframe */
      margin: 0;
    }
    p {
      border: 1px solid #ccc;
      padding: 1rem;
      color: red;
      font-family: sans-serif;
    }
  </style>
  <p>Element with Shadow DOM</p>
`;

let el = document.querySelector('.my-element');

if (document.body.attachShadow) {
  let shadow = el.attachShadow({ mode: 'open' }); // Allows JS access inside
  shadow.innerHTML = content;
} else {
  let newiframe = document.createElement('iframe');
  'srcdoc' in newiframe
    ? newiframe.srcdoc = content
    : newiframe.src = 'data:text/html;charset=UTF-8,' + content;
  let parent = el.parentNode;
  parent.replaceChild(newiframe, el);
}

TL;DR
  • Shadow DOM is pretty cool.
  • It's comparable to an iframe in many ways, including style encapsulation. Embedded third-party content is a pretty good use case.
  • It's possible to use it while falling back to an iframe pretty easily.
  • It's a part of the larger world of web components, but you don't have to go all-in on all that if you don't want to.

Here's another simple demo (this one using a custom element), but instead of rolling our own back support, it's polyfilled.

Playing with Shadow DOM is a post from CSS-Tricks

Implementing Webmentions

Wed, 07/19/2017 - 18:46

We get a decent amount of comments on blog posts right here on CSS-Tricks (thanks!), but I'd also say the heyday for that is over. These days, if someone writes some sort of reaction to a blog post, it could be on their own blog, or more likely, on some social media site. It makes sense. That's their home base and it's more useful to them to keep their words there.

It's a shame, though. While this fragmented conversation is slightly more useful for each individual person, it's less useful as a whole. There is no canonical conversation thread. That's what Webmentions, an official spec, are all about! In a sense, they allow the conversation to be dispersed but brought all together in a canonical conversation thread on the main post.

Webmentions don't need to be an alternative to comments, although when you pop over to Drew McLellan's post you'll see he's using them that way. They can be in addition to "regular" comments. Surely the idea of turning off regular comments is appealing from a community perspective (fewer asshats likely when you need to link to your published words) and a technical debt perspective.

Rachel Andrew also recently implemented them, and this is classic Jeremy Keith stuff.
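For reference, the spec's discovery mechanism is just a link element advertising your endpoint (the URL here is a placeholder):

```html
<link rel="webmention" href="https://example.com/webmention-endpoint">
```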

Direct Link to ArticlePermalink

Implementing Webmentions is a post from CSS-Tricks

Intro to Hoodie and React

Wed, 07/19/2017 - 13:01

Let's take a look at Hoodie, the "Back-End as a Service" (BaaS) built specifically for front-end developers. I want to explain why I feel like it is a well-designed tool and deserves more exposure among the spectrum of competitors than it gets today. I've put together a demo that demonstrates some of the key features of the service, but I feel the need to first set the scene for its use case. Feel free to jump over to the demo repo if you want to get the code. Otherwise, join me for a brief overview.

Setting the Scene

It is no secret that JavaScript is eating the world these days and with its explosion in popularity, an ever-expanding ecosystem of tooling has arisen. The ease of developing a web app has skyrocketed in recent years thanks to these tools. Developer tools Prettier and ESLint give us freedom to write how we like and still output clean code. Frameworks like React and Vue provide indispensable models for creating interactive experiences. Build tools like Webpack and Babel allow us to use the latest and greatest language features and patterns without sacrificing speed and efficiency.

Much of the focus in JavaScript these days seems to be on front-end tools, but it does not mean there is no love to be found on the back-end. This same pattern of automation and abstraction is available on the server side, too, primarily in the form of what we call "Backend as a Service" (BaaS). This model provides a way for front end developers to link their web or mobile apps to backend services without the need to write server code.

Many of these services have been around for a while, but no real winner has come forth. Parse, an early player in the space, was gobbled up by Facebook in 2013 and subsequently shut down. Firebase was acquired by Google and is slowly making headway in developing market share. Then only a few weeks ago, MongoDB announced their own BaaS, Stitch, with hopes of capitalizing on the market penetration of their DB.

BaaS Advantages

There are an overwhelming number of BaaS options, however, they all have the same primary advantages at their core.

  • Streamlined development: The obvious advantage of having no custom server is that it removes the need to develop one! This means your development team will perform less context switching and ultimately have more time to focus on core logic. No server language knowledge required!
  • No boilerplate servers: Many servers end up existing for the sole purpose of connecting a client with relevant data. This often results in massive amounts of web framework and DAL boilerplate code. The BaaS model removes the need for this repetitive code.

These are just the main advantages of BaaS. Hoodie provides these and many more unique capabilities that we will walk through in the next section.

Try on your Hoodie

To demonstrate some of the out-of-the-box functionality provided by Hoodie, I am going to walk you through a few pieces of a simple Markdown note taking web application. It is going to handle user authentication, full CRUD of users' notes, and the ability to keep working even when a connection to the internet is lost.

You can follow along with the code by cloning the hoodie-notes GitHub repository to your local machine and running it using the directions in the README.

This walkthrough is meant to focus on the implementation of the hoodie-client and thus, assumes prior knowledge of React, Redux, and ES6. Knowledge of these, although helpful, is not necessary to understand the scope of what we will discuss here.

The Basics

There are really only three things you have to do to get started with Hoodie.

  1. Place your static files in a folder called /public at the root of your project. We place our index.html and all transpiled JS and image files here so they can be exposed to clients.
  2. Initialize the Hoodie client in your front end code:

    const hoodie = new Hoodie({
      url: window.location.origin,
      PouchDB: require('pouchdb-browser')
    })
  3. Start your hoodie server by running hoodie in the terminal

Of course, there is more to creating the app, but that is all you really need to get started!

User Auth

Hoodie makes user and session management incredibly simple. The Account API can be used to create users, manage their login sessions, and update their accounts. All code handling these API calls is stored in the user reducer.

When our app starts up, we see a login screen with the option to create a user or log in.

When either of these buttons are pressed, the corresponding Redux thunk is dispatched to handle the authentication. We use the signUp and signIn functions to handle these events. To create a new account, we make the following call:

hoodie.account.signUp({
  username: 'guest',
  password: '1234'
}).then(account => {
  // successful creation
}).catch(err => {
  // account creation failure
})

Once we have an account in the system, we can login in the future with:

hoodie.account.signIn({
  username: 'guest',
  password: '1234'
}).then(account => {
  // successful login
}).catch(err => {
  // login failure
})

We now have user authentication, authorization, and session management without writing a single line of server code. To add a cherry on top, Hoodie manages sessions in local storage, meaning that you can refresh the page without needing to log back in. To leverage this, we can execute the following logic on the initial rendering of our app:

hoodie.account.get()
  .then(({ session, username }) => {
    if (session) console.log(`${username} is already logged in!`)
  }).catch(err => {
    // session check failure
  })

And to logout we only need to call hoodie.account.signOut(). Cool!

CRUD Notes

Perhaps the nicest thing about user management in Hoodie is that all documents created while logged in are only accessible by that authenticated user. Authorization is entirely abstracted from us, allowing us to focus on the simple logic of creating, retrieving, updating, and deleting documents using the Store API. All code handling these API calls is stored in the notes reducer.

Let's start off with creating a new note:

hoodie.store.add({ title: '', text: '' })
  .then(note => console.log(note))
  .catch(err => console.error(err))

We can pass any object we would like to the add function, but here we create an empty note with a title and text field. In return, we are given a new object in the Hoodie datastore with its corresponding unique ID and the properties we gave it.

When we want to update that document, it is as simple as passing that same note back in with the updated (or even new) properties:

hoodie.store.update(note)
  .then(note => console.log(note))
  .catch(err => console.error(err))

Hoodie handles all the diffing and associated logic that it takes to update the store. All we need to do is pass in the note to the update function. Then, when the user elects to delete that note, we pass its ID to the remove function:

hoodie.store.remove(note._id)
  .then(() => console.log(`Removed note ${note._id}`))
  .catch(err => console.error(err))

The last thing we need to do is retrieve our notes when the user logs back in. Since we are only storing notes in the datastore, we can go ahead and retrieve all of the user's documents with the findAll function:

hoodie.store.findAll()
  .then(notes => console.log(notes))
  .catch(err => console.error(err))

If we wanted, we could use the find function to look up individual documents as well.

Putting all of these calls together, we've essentially replaced a /notes REST API endpoint that otherwise would have required a fair amount of boilerplate request handling and DAL code. You might say this is lazy, but I'd say we are working smart!

Monitoring the connection status

Hoodie was built with an offline-first mentality, meaning that it assumes that clients will be offline for extended periods of time during their session. This attitude prioritizes the handling of these events such that it does not produce errors, but instead allows users to keep working as usual without fear of data loss. This functionality is enabled under the hood by PouchDB and a clever syncing strategy, however, the developer using the hoodie-client does not need to be privy to this as it is all handled behind the scenes.

We'll see how this improves our user experience in a bit, but first let's see how we can monitor this connection using the Connection Status API. When the app first renders, we can establish listeners for our connection status on the root component like so:

componentDidMount() {
  hoodie.connectionStatus.startChecking({ interval: 3000 })
  hoodie.connectionStatus.on('disconnect', () => this.props.updateStatus(false))
  hoodie.connectionStatus.on('reconnect', () => this.props.updateStatus(true))
}

In this case, we tell Hoodie to periodically check our connection status and then attach two listeners to handle changes in connections. When either of these events fire, we update the corresponding value in our Redux store and adjust the connection indicator in the UI accordingly. This is all the code we need to alert the user that they have lost a connection to our server.

To test this, open up the app in a browser. You'll see the connection indicator in the top left of the app. If you stop the server while the page is still open, you will see the status change to "Disconnected" on the next interval.

While you are disconnected, you can continue to add, edit, and remove notes as you would otherwise. Changes are stored locally and Hoodie keeps track of the changes that are made while you are offline.

Once you're ready, turn the server back on and the indicator will once again change back to "Connected" status. Hoodie then syncs with the server in the background and the user is none the wiser about the lapse of connectivity (outside of our indicator, of course).

If you don't believe it's that easy, go ahead and refresh your page. You'll see that the data you created while offline is all there, as if you never lost the connection. Pretty incredible stuff considering we did nothing to make it happen!

Why I Like Hoodie

Hoodie is not the only BaaS offering by any means, but I consider it a great option for several reasons:

  1. Simple API: In this walkthrough, we were able to cover 3 out of 4 of the Hoodie APIs. They are incredibly simple, without much superfluous functionality. I am a big fan of simplicity over complexity until the latter cannot be avoided and Hoodie definitely fits that bill.
  2. Free and self-hosted: Putting Hoodie into production yourself can seem like a drag, but I believe such a service gives you long-term assurance. Paid, hosted services require a bet on that service's reliability and longevity (see: Parse). This, along with vendor lock-in, keep me on the side of self-hosting when it makes sense.
  3. Open Source: No explanation needed here...support the OSS community!
  4. Offline-first: Hoodie provides a seamless solution to the relevant problem of intermittent connectivity and removes the burden of implementation from developers.
  5. Plugins: Hoodie supports 3rd party plugins to provide support for additional server-side functionality outside the scope of the API. It allows for some clever solutions when you begin to miss the flexibility of having your own server.
  6. Philosophy: The developers who built and support Hoodie have clearly thought hard about what the service represents and why they built it. Their promotion of openness, empowerment, and decentralization (among other things) is great to see at the core of an open source project. I love everything about this!

Before you make the call to cut ties with your server in favor of a BaaS like Hoodie, there are some things you should consider.

Do you favor increased development speed or future flexibility? If the former is your priority, then go with a BaaS! If you really care about performance and scale, you're probably better off spinning up your own server(s). This points toward using a BaaS for an MVP or light-weight app and creating a custom server for well-defined, complex applications.

Does your app require integration with any 3rd party services? If so, it is likely you will need the flexibility of your own server for implementing your own custom implementation logic rather than constrain yourself to a Hoodie plugin.

Lastly, the documentation for Hoodie is severely lacking. It will help you get started, but many API definitions are missing from the docs and you will have to fill in some of the blanks yourself. This is mitigated by the fact that the interface is extremely well thought out. Nonetheless, it makes for a frustrating experience if you are used to complete documentation.


For front end developers, using a BaaS is a great prospect when considering your options for creating a web application. It avoids the need for writing server logic and implementing what essentially amounts to a boilerplate REST API. Hoodie delivers this possibility, with the added bonus of a clean interface, simple user management, and offline-first capabilities.

If all you need is a simple CRUD application, consider using Hoodie for your next app!

Additional Resources

Intro to Hoodie and React is a post from CSS-Tricks

More Gotchas Getting Inline SVG Into Production—Part II

Tue, 07/18/2017 - 11:49

The following is a guest post by Rob Levin and Chris Rumble. Rob and Chris both work on the product design team at Mavenlink. Rob is also creator and host of the SVG Immersion Podcast and wrote the original 5 Gotchas article back in '14. Chris is a UI and Motion Designer/Developer based out of San Francisco. In this article, they go over some additional issues they encountered after incorporating inline SVGs into Mavenlink's flagship application more than 2 years ago. The article illustrations were done by Rob and—in the spirit of our topic—are 100% vector SVGs!

Wow, it's been over 2 years since we posted the 5 Gotchas Getting SVG Into Production article. Well, we've encountered some new gotchas making it time for another follow up post! We'll label these 6-10 paying homage to the first 5 gotchas in the original post :)

Gotcha Six: IE Drag & Drop SVG Disappears

If you take a look at the animated GIF above, you'll notice that I have a dropdown of task icons on the left. I attempt to drag the row outside of the sortable's container element, and then, when I drop the row back, the SVG icons have completely disappeared. This insidious bug didn't seem to happen on Windows 7 IE11 in my tests, but it did happen in Windows 10's IE11! In our example, the issue happens with the combination of jQuery UI Sortable and the nestedSortable plugin (which needs to be able to drag items off the container to achieve the nesting), but any sort of detaching of DOM elements and/or moving them in the DOM could result in this disappearing behavior. Oddly, I wasn't able to find a Microsoft ticket at time of writing, but, if you have access to a Windows 10 / IE11 setup, you can see for yourself in this simple pen which was forked from fergaldoyle. The Pen shows the same essential disappearing behavior, but this time it's caused by simply moving an element containing an SVG icon via JavaScript's appendChild.

A solution to this is to reset the href.baseVal attribute on all <use> elements that descend from the event.target container element when a callback is called. For example, in the case of using Sortable, we were able to call the following method from inside Sortable's stop callback:

function ie11SortableShim(uiItem) {
  function shimUse(i, useElement) {
    if (useElement.href && useElement.href.baseVal) {
      // this triggers fixing of href for IE
      useElement.href.baseVal = useElement.href.baseVal;
    }
  }
  if (isIE11()) {
    $(uiItem).find('use').each(shimUse);
  }
};

I've left out the isIE11 implementation, as it can be done a number of ways (sadly, most reliably through sniffing the window.navigator.userAgent string and matching a regex). But, the general idea is: find all the <use> elements in your container element, and then reassign their href.baseVal to trigger IE to re-fetch those external xlink:hrefs. Now, you may have an entire row of complex nested sub-views and may need to go with a more brute force approach. In my case, I also needed to do:


to rerender the row. Your mileage may vary ;)

If you're experiencing this outside of Sortable, you likely just need to hook into some "after" event on whatever the parent/container element is, and then do the same sort of thing.

As I'm boggled by this IE11 specific issue, I'd love to hear if you've encountered this issue yourself, have any alternate solutions and/or greater understanding of the root IE issues, so do leave a comment if so.

Gotcha Seven: IE Performance Boosts Replacing SVG4Everybody with Ajax Strategy

In the original article, we recommended using SVG4Everybody as a means of shimming IE versions that don't support using an external SVG definitions file and then referencing via the xlink:href attribute. But, it turns out to be problematic for performance to do so, and probably more kludgy as well, since it's based off user agent sniffing regex. A more "straight forward" approach, is to use Ajax to pull in the SVG sprite. Here's a slice of our code that does this which is, essentially, the same as what you'll find in the linked article:

loadSprite = null;

(function() {
  var loading = false;
  return loadSprite = function(path) {
    if (loading) {
      return;
    }
    return document.addEventListener('DOMContentLoaded', function(event) {
      var xhr;
      loading = true;
      xhr = new XMLHttpRequest();
      xhr.open('GET', path, true);
      xhr.responseType = 'document';
      xhr.onload = function(event) {
        var el, style;
        el = xhr.responseXML.documentElement;
        style = el.style;
        style.display = 'none';
        return document.body.insertBefore(el, document.body.childNodes[0]);
      };
      return xhr.send();
    });
  };
})();

module.exports = {
  loadSprite: loadSprite,
};

The interesting part about all this for us, was that—on our icon-heavy pages—we went from ~15 seconds down to ~1-2 seconds (for first uncached page hit) in IE11.

Something to consider about the Ajax approach: you'll potentially need to deal with a "flash of no SVG" until the HTTP request resolves. But, in cases where you already have a heavy initial loading SPA style application that throws up a spinner or progress indicator, that might be a sunk cost. Alternatively, you may wish to just go ahead and inline your SVG definition/sprite and take the cache hit for better perceived performance. If so, measure just how much you're increasing the payload.

Gotcha Eight: Designing Non-Scaling Stroke Icons

In cases where you want to have various sizes of the same icon, you may want to lock down the stroke sizes of those icons…

Why, what's the issue?

Imagine you have a height: 10px; width: 10px; icon with some 1px shapes and scale it to 15px. Those 1px shapes will now be 1.5px which ends up creating a soft or fuzzy icon due to borders being displayed on sub-pixel boundaries. This softness also depends on what you scale to, as that will have a bearing on whether your icons are on sub-pixel boundaries. Generally, it's best to control the sharpness of your icons rather than leaving them up to the will of the viewer's browser.
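The arithmetic behind that fuzziness is simple enough to sketch in JavaScript (the function name is ours, just for illustration):

```javascript
// Hypothetical helper: what does a stroke width become when an icon
// is scaled from one rendered size to another?
function scaledStroke(strokeWidth, fromSize, toSize) {
  return strokeWidth * (toSize / fromSize);
}

console.log(scaledStroke(1, 10, 15)); // 1.5 — lands on a sub-pixel boundary (fuzzy)
console.log(scaledStroke(1, 10, 20)); // 2 — stays pixel-aligned (sharp)
```

With vector-effect: non-scaling-stroke (covered below), the stroke simply stays at its authored width regardless of scale.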

The other problem is more of a visual weight issue. As you scale a standard icon using fills, it scales proportionately... I can hear you saying "SVGs are supposed to do that". Yes, but being able to control the stroke of your icons can help them feel more related and seen as more of a family. I like to think of it like using a text typeface for titling rather than a display or titling typeface: you can do it, but why, when you could have a tight and sharp UI?

Prepping the Icon

I primarily use Illustrator to create icons, but plenty of tools out there will work fine. This is just my workflow with one of those tools. I start creating an icon by focusing on what it needs to communicate not really anything technical. After I'm satisfied that it solves my visual needs I then start scaling and tweaking it to fit our technical needs. First, size and align your icon to the pixel grid (⌘⌥Y in Illustrator for pixel preview, on a Mac) at the size you are going to be using it. I try to keep diagonals on 45° and adjust any curves or odd shapes to keep them from getting weird. No formula exists for this, just get it as close as you can to something you like. Sometimes I scrap the whole idea if it's not gonna work at the size I need and start from scratch. If it's the best visual solution but no one can identify it... it's not worth anything.

Exporting AI

I usually just use the Export As "SVG" option in Illustrator, as I find it gives me a standard and minimal place to start. I use the Presentation Attributes setting and save it off. It will come out looking something like this:

```html
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" width="18" height="18" viewBox="0 0 18 18">
  <title>icon-task-stroke</title>
  <polyline points="5.5 1.5 0.5 1.5 0.5 4.5 0.5 17.5 17.5 17.5 17.5 1.5 12.5 1.5" fill="none" stroke="#b6b6b6" stroke-miterlimit="10"/>
  <rect x="5.5" y="0.5" width="7" height="4" fill="none" stroke="#b6b6b6" stroke-miterlimit="10"/>
  <line x1="3" y1="4.5" x2="0.5" y2="4.5" fill="none" stroke="#b6b6b6" stroke-miterlimit="10"/>
  <line x1="17.5" y1="4.5" x2="15" y2="4.5" fill="none" stroke="#b6b6b6" stroke-miterlimit="10"/>
  <polyline points="6 10 8 12 12 8" fill="none" stroke="#ffa800" stroke-miterlimit="10" stroke-width="1"/>
</svg>
```

I know you see a couple of 1/2 pixels in there! There seem to be a few schools of thought on this. I prefer to have the stroke line up to the pixel grid, as that is what will display in the end. The coordinates are placed on the 1/2 pixel so that your 1px stroke sits 1/2 on each side of the path. It looks something like this (in Illustrator):

Gotcha Nine: Implementing Non-Scaling Stroke Clean Up

Our Grunt task, which Rob talks about in the previous article, cleans up almost everything. Unfortunately, for non-scaling strokes you have some hand-cleaning to do on the SVG, but I promise it's easy! Just add a class to the paths whose stroke you want to keep from scaling. Then, in your CSS, give that class the vector-effect: non-scaling-stroke; property, which should look something like this:

```css
.non-scaling-stroke {
  vector-effect: non-scaling-stroke;
}
```

```html
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 18 18">
  <title>icon-task-stroke</title>
  <polyline class="non-scaling-stroke" points="5.5 1.5 0.5 1.5 0.5 4.5 0.5 17.5 17.5 17.5 17.5 1.5 12.5 1.5" stroke="#b6b6b6" stroke-miterlimit="10"/>
  <rect class="non-scaling-stroke" x="5.5" y="0.5" width="7" height="4" stroke="#b6b6b6" stroke-miterlimit="10"/>
  <line class="non-scaling-stroke" x1="3" y1="4.5" x2="0.5" y2="4.5" stroke="#b6b6b6" stroke-miterlimit="10"/>
  <line class="non-scaling-stroke" x1="17.5" y1="4.5" x2="15" y2="4.5" stroke="#b6b6b6" stroke-miterlimit="10"/>
  <polyline class="non-scaling-stroke" points="6 10 8 12 12 8" stroke="currentColor" stroke-miterlimit="10" stroke-width="1"/>
</svg>
```

This keeps the strokes, where specified, from changing when the SVG is scaled (in other words, the strokes will remain at 1px even if the overall SVG is scaled). We also add fill: none; to a class in our CSS, where we also control the stroke color, as paths will fill with #000000 by default. That's it! Now you have beautiful, pixel-adherent strokes that maintain their width!

And after all is said and done (and you have preprocessed via grunt-svgstore per the first article), your SVG will look like this in the defs file:

```html
<svg>
  <symbol viewBox="0 0 18 18" id="icon-task-stroke">
    <title>icon-task-stroke</title>
    <path class="non-scaling-stroke" stroke-miterlimit="10" d="M5.5 1.5h-5v16h17v-16h-5"/>
    <path class="non-scaling-stroke" stroke-miterlimit="10" d="M5.5.5h7v4h-7zM3 4.5H.5M17.5 4.5H15"/>
    <path class="non-scaling-stroke" stroke="currentColor" stroke-miterlimit="10" d="M6 10l2 2 4-4"/>
  </symbol>
</svg>
```

CodePen Example

The icon set on the left scales proportionately; the one on the right uses vector-effect: non-scaling-stroke;. If you're noticing that your resized SVG icons' strokes are starting to look out of control, the above technique will let you lock those babies down.

See the Pen SVG Icons: Non-Scaling Stroke by Chris Rumble (@Rumbleish) on CodePen.

Gotcha Ten: Accessibility

With everything involved in getting your SVG icon system up and running, it's easy to overlook accessibility. That's a shame, because SVGs are inherently accessible, especially compared to icon fonts, which are known to not always play well with screen readers. At bare minimum, we need to sprinkle in a bit of code to prevent any text embedded within our SVG icons from being announced by screen readers. Although we'd love to just add a <title> tag with alternative text and call it a day, the folks at Simply Accessible have found that Firefox and NVDA will not, in fact, announce the <title> text.

Their recommendation is to apply the aria-hidden="true" attribute to the <svg> itself, and then add an adjacent span element with a .visuallyhidden class. That element will be hidden visually, but its text will be available for the screen reader to announce. I'm bummed that it doesn't feel very semantic, but it may be a reasonable compromise until support for the more intuitively reasonable <title> tag (and combinations of friends like role, aria-labelledby, etc.) works across both browser and screen reader implementations. To my mind, the aria-hidden on the SVG may be the biggest win, as we wouldn't want to inadvertently set off the screen reader for, say, 50 icons on a page!

Here's the general pattern, borrowed but altered a bit from Simply Accessible's pen:

```html
<a href="/somewhere/foo.html">
  <svg class="icon icon-close" viewBox="0 0 32 32" aria-hidden="true">
    <use xlink:href="#icon-close"></use>
  </svg>
  <span class="visuallyhidden">Close</span>
</a>
```

As stated before, the two interesting things here are:

  1. The aria-hidden attribute, applied to prevent screen readers from announcing any text embedded within the SVG.
  2. The nasty-but-useful visuallyhidden span, which WILL be announced by screen readers.

Honestly, if you would rather just code this with the <title> tag et al. approach, I wouldn't necessarily argue with you, as this does feel kludgy. But, as you'll see in the code that follows, you could treat this solution as a version 1 implementation, and then make the switch quite easily when support is better…

Assuming you have some sort of centralized template helper or utils system for generating your use xlink:href fragments, it's quite easy to implement the above. We write this in CoffeeScript, but since JavaScript is more universal, here's the JavaScript it compiles to:

```javascript
templateHelpers = {
  svgIcon: function(iconName, iconClasses, iconAltText, iconTitle) {
    // Visually hidden alternative text, announced by screen readers
    var altTextElement = iconAltText ? "<span class='visuallyhidden'>" + iconAltText + "</span>" : '';
    var titleElement = iconTitle ? "<title>" + iconTitle + "</title>" : '';
    iconClasses = iconClasses ? " " + iconClasses : '';
    return this.safe.call(this, "<svg aria-hidden='true' class='icon-new " + iconClasses + "'><use xlink:href='#" + iconName + "'>" + titleElement + "</use></svg>" + altTextElement);
  },
  ...
```

Why are we putting the <title> tag as a child of <use> instead of the <svg>? According to Amelia Bellamy-Royds (invited expert developing the SVG & ARIA specs at the W3C, and author of SVG books from O'Reilly Media), you will get tooltips in more browsers.

Here's the CSS for .visuallyhidden. If you're wondering why we're doing it this particular way and not, say, with display: none; or other familiar means, see Chris Coyier's article, which explains this in depth:

```css
.visuallyhidden {
  border: 0;
  clip: rect(0 0 0 0);
  height: 1px;
  width: 1px;
  margin: -1px;
  padding: 0;
  overflow: hidden;
  position: absolute;
}
```

This code is not meant to be used "copy pasta" style, as your system will likely have nuanced differences. But it shows the general approach, and the important bits are:

  • the iconAltText parameter, which allows the caller to provide alternative text if it seems appropriate (e.g. the icon is not purely decorative)
  • the aria-hidden="true" attribute, which is now always placed on the SVG element
  • the .visuallyhidden class, which hides the element visually while still making its text available to screen readers

As you can see, it'd be quite easy to refactor this code later to use the usually recommended <title> approach, and at least the maintenance hit won't be bad should we choose to do so. The relevant refactor changes would likely be similar to:

```javascript
var aria = iconAltText ? 'role="img" aria-label="' + iconAltText + '"' : 'aria-hidden="true"';
return this.safe.call(this, "<svg " + aria + " class='icon-new " + iconClasses + "'><use xlink:href='#" + iconName + "'>" + titleElement + "</use></svg>");
```

So, in this version (credit to Amelia for the aria part!), if the caller passes alternative text in, we do NOT hide the SVG, and we skip the visually hidden span technique, instead adding the role and aria-label attributes to the SVG. This feels much cleaner, but the jury is out on whether screen readers support this approach as well as the visually hidden span technique. Maybe the experts (Amelia and the Simply Accessible folks) will chime in in the comments :)

Bonus Gotcha: make viewBox width and height integers or scaling gets funky

If you have an SVG icon that you export with a resulting viewBox like viewBox="0 0 100 86.81", you may have issues if you use transform: scale. For example, if you're setting the width and height equal, as is typical (e.g. 16px x 16px), you might expect the SVG to just center itself in its containing box, especially if you're using the defaults for preserveAspectRatio. But if you attempt to scale it at all, you'll start to notice clipping.
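If you'd rather patch the viewBox than re-export, a hypothetical cleanup helper (the function name and rounding choice are my own, not part of any tool mentioned here) might look like:

```javascript
// Hypothetical helper: round a viewBox's width and height (the third and
// fourth values) up to integers so transform: scale doesn't clip the icon.
// The first two values (min-x and min-y) are left untouched.
function normalizeViewBox(viewBox) {
  var parts = viewBox.trim().split(/\s+/).map(Number);
  return [parts[0], parts[1], Math.ceil(parts[2]), Math.ceil(parts[3])].join(" ");
}

console.log(normalizeViewBox("0 0 100 86.81")); // "0 0 100 87"
```

Note that rounding the viewBox changes the aspect ratio by a hair, so you'd still want to nudge the artwork to fill the new box.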

In the following Adobe Illustrator screen capture, you see that "Snap to Grid" and "Snap to Pixel" are both selected:

The following pen shows the first three icons getting clipped. This particular icon (it's defined as a <symbol> and then referenced using the xlink:href strategy we've already gone over) has a viewBox with a non-integer height of 86.81, and thus we see the clipping on the sides. The next three examples (icons 4-6) have integer widths and heights (the third argument to viewBox is width and the fourth is height) and do not clip.

See the Pen SVG Icons: Scale Clip Test 2 by Rob Levin (@roblevin) on CodePen.


The above challenges are just some of the ones we've encountered at Mavenlink, having had a comprehensive SVG icon system in our application for well over two years now. The mysterious nature of some of these is par for the course given our splintered world of browsers, screen readers, and operating systems. But perhaps these additional gotchas will help you and your team better harden your SVG icon implementations!

More Gotchas Getting Inline SVG Into Production—Part II is a post from CSS-Tricks

Musings on HTTP/2 and Bundling

Mon, 07/17/2017 - 12:43

HTTP/2 has been one of my areas of interest. In fact, I've written a few articles about it just in the last year. In one of those articles I made this unchecked assertion:

If the user is on HTTP/2: You'll serve more and smaller assets. You’ll avoid stuff like image sprites, inlined CSS, and scripts, and concatenated style sheets and scripts.

I wasn't the only one to say this, though in all fairness to Rachel, she qualifies her assertion with caveats in her article. To be fair, it's not bad advice in theory. HTTP/2's multiplexing ability gives us leeway to avoid bundling without suffering the ill effects of head-of-line blocking (something we're painfully familiar with in HTTP/1 environments). Unraveling some of these HTTP/1-specific optimizations can make development easier, too. In a time when web development seems more complicated than ever, who wouldn't appreciate a little more simplicity?

As with anything that seems simple in theory, putting something into practice can be a messy affair. As time has progressed, I've received great feedback from thoughtful readers on this subject that has made me re-think my unchecked assertions on what practices make the most sense for HTTP/2 environments.

The case against bundling

The debate over unbundling assets for HTTP/2 centers primarily around caching. The premise is if you serve more (and smaller) assets instead of a giant bundle, caching efficiency for return users with primed caches will be better. Makes sense. If one small asset changes and the cache entry for it is invalidated, it will be downloaded again on the next visit. However, if only one tiny part of a bundle changes, the entire giant bundle has to be downloaded again. Not exactly optimal.
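A back-of-the-envelope sketch of that premise (the sizes here are made up for illustration, not measurements):

```javascript
// If one chunk's worth of code changes between visits, how many KB does a
// returning user with a primed cache have to re-download?
function bytesRedownloaded(totalKB, chunkCount, changedChunks) {
  var chunkKB = totalKB / chunkCount;
  return chunkKB * changedChunks;
}

console.log(bytesRedownloaded(300, 1, 1));  // 300: the whole bundle again
console.log(bytesRedownloaded(300, 10, 1)); // 30: only the changed chunk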

Why unbundling could be suboptimal

There are times when unraveling bundles makes sense. For instance, code splitting promotes smaller and more numerous assets that are loaded only for specific parts of a site/app. This makes perfect sense. Rather than loading your site's entire JS bundle up front, you chunk it out into smaller pieces that you load on demand. This keeps the payloads of individual pages low. It also minimizes parsing time. This is good, because excessive parsing can make for a janky and unpleasant experience as a page paints and becomes interactive, but has not yet fully loaded.

But there's a drawback to this we sometimes miss when we split assets too finely: Compression ratios. Generally speaking, smaller assets don't compress as well as larger ones. In fact, if some assets are too small, some server configurations will avoid compressing them altogether, as there are no practical gains to be made. Let's look at how well some popular JavaScript libraries compress:

Filename                 Uncompressed Size  Gzip (Ratio %)     Brotli (Ratio %)
jquery-ui-1.12.1.min.js  247.72 KB          66.47 KB (26.83%)  55.8 KB (22.53%)
angular-1.6.4.min.js     163.21 KB          57.13 KB (35%)     49.99 KB (30.63%)
react-0.14.3.min.js      118.44 KB          30.62 KB (25.85%)  25.1 KB (21.19%)
jquery-3.2.1.min.js      84.63 KB           29.49 KB (34.85%)  26.63 KB (31.45%)
vue-2.3.3.min.js         77.16 KB           28.18 KB (36.52%)
zepto-1.2.0.min.js       25.77 KB           9.57 KB (37.14%)
preact-8.1.0.min.js      7.92 KB            3.31 KB (41.79%)   3.01 KB (38.01%)
rlite-2.0.1.min.js       1.07 KB            0.59 KB (55.14%)   0.5 KB (46.73%)

Sure, this comparison table is overkill, but it illustrates a key point: as a rule of thumb, larger files compress more effectively than smaller ones (notice how the compressed size, as a percentage of the original, shrinks as the files get bigger). When you split a large bundle into teeny tiny chunks, you won't get as much benefit from compression.

Of course, there's more to performance than asset size. In the case of JavaScript, we may want to tip our hand toward smaller page/template-specific files because the initial load of a specific page will be more streamlined with regard to both file size and parse time. Even if those smaller assets don't compress as well individually. Personally, that would be my inclination if I were building an app. On traditional, synchronous "site"-like experiences, I'm not as inclined to pursue code-splitting.

Yet, there's more to consider than JavaScript. Take SVG sprites, for example. Where these assets are concerned, bundling appears more sensible. Especially for large sprite sets. I performed a basic test on a very large icon set of 223 icons. In one test, I served a sprited version of the icon set. In the other, I served each icon as individual assets. In the test with the SVG sprite, the total size of the icon set represents just under 10 KB of compressed data. In the test with the unbundled assets, the total size of the same icon set was 106 KB of compressed data. Even with multiplexing, there's simply no way 106 KB can be served faster than 10 KB on any given connection. The compression doesn't go far enough on the individualized icons to make up the difference. Technical aside: The SVG images were optimized by SVGO in both tests.

Browsers that don't support HTTP/2

Yep, this is a thing. Opera Mini in particular seems to be a holdout in this regard, and depending on your users, this may not be an audience segment to ignore. While around 80% of people globally surf with browsers that can support HTTP/2, that number declines in some corners of the world. Shy of 50% of all users in India, for example, use a browser that can communicate to HTTP/2 servers. This is at least the picture for now, and support is trending upward, but we're a long ways from ubiquitous support for the protocol in browsers.

What happens when a user talks to an HTTP/2 server with a browser that doesn't support it? The server falls back to HTTP/1. This means you're back to the old paradigms of performance optimization. So again, do your homework. Check your analytics and see where your users are coming from. Better yet, leverage caniuse.com's ability to analyze your analytics and see what your audience supports.

The reality check

Would any sane developer architect their front end code to load 223 separate SVG images? I hope not, but nothing really surprises me anymore. In all but the most complex and feature-rich applications, you'd be hard-pressed to find so much iconography. But, it could make more sense for you to coalesce those icons in a sprite and load it up front and reap the benefits of faster rendering on subsequent page navigations.

Which leads me to the inevitable conclusion: In the nooks and crannies of the web performance discipline there are no simple answers, except "do your research". Rely on analytics to decide if bundling is a good idea for your HTTP/2-driven site. Do you have a lot of users that only go to one or two pages and leave? Maybe don't waste your time bundling stuff. Do your users navigate deeply throughout your site and spend significant time there? Maybe bundle.

This much is clear to me: If you move your HTTP/1-optimized site to an HTTP/2 host and change nothing in your client-side architecture, it's not going to be a big deal. So don't trust blanket statements from some web developer writing blog posts (i.e., me). Figure out how your users behave, what optimizations make the best sense for your situation, and adjust your code accordingly. Good luck!

Jeremy Wagner is the author of Web Performance in Action, an upcoming title from Manning Publications. Use coupon code sswagner to save 42%.

Check him out on Twitter: @malchata

Musings on HTTP/2 and Bundling is a post from CSS-Tricks

Did CSS get more complicated since the late nineties?

Sat, 07/15/2017 - 12:53

Hidde de Vries gathers some of the early thinking about CSS:

There is quite a bit of information on the web about how CSS was designed. Keeping it simple was a core principle. It continued to be — from the early days and the first implementations in the late nineties until current developments now.

The four main design principles listed are fascinating:

  • Authors can specify as much or little as they want
  • It is not a programming language by design
  • They are agnostic as to which medium they are used for
  • It is stream-based

So... did it?

I think lots has changed since the early nineties, but not really things that touch on how we apply CSS to structured markup.

Direct Link to ArticlePermalink

Did CSS get more complicated since the late nineties? is a post from CSS-Tricks

Let’s say you wanna open source a little thing…

Fri, 07/14/2017 - 13:43

Let's say you've written a super handy little bit of JavaScript. Nice! Well done, you. Surely, the world can benefit from this. A handful of people, at least. No need to keep this locked up. You've benefitted from open source tremendously in your career. This is the perfect opportunity to give back!

Let's do this.

You're going to need to chuck it into a GitHub repo. That's like table stakes for open source. This is where people can find it, link to it, see the code, and all that. It's a place you can push changes to if you need to.

You'll need to pick a license for it. If the point of this is "giving back", you really do need to; otherwise, it's essentially like you retain exclusive copyright of it. It's somewhat counter-intuitive, but picking a license opens up usage, rather than tightening it.

You'll need to put a README in there. As amazingly self-documenting as you think your code is, it isn't. You'll need some plain-language writing in there to explain what your thing is and does. Usage samples are vital.

You'll probably wanna chuck some demos in there, too. Maybe a whole `/demos/` directory so the proof can be in the pudding.

While you've made some demos, you might as well put some tests in there. Those go hand-in-hand. Tests give people who might use your thing some peace of mind that it's going to work, and that as an author you care about making sure it does. If you plan to keep your thing updated, tests will ensure you don't break things as you make changes.

Speaking of people using your thing... just how are they going to do that? You probably can't just leave a raw function theThing() in `the-thing.js`! This isn't the 1800s, they'll tell you. You didn't even wrap an IIFE around it?! You should have at least made it a singleton.

People are going to want to const theThing = require("the-thing.js"); that sucker. That's the CommonJS format, which seems reasonable. But that's kinda more for Node.js than the browser, so it also seems reasonable to use define() and return a function, otherwise known as AMD. Fortunately, there is UMD, which is like both of those at the same time.

Wait wait wait. ES6 has now firmly arrived and it has its own module format. People are for sure going to want to import { theThing } from './the-thing.js';, which means you're going to need to export function theThing() { }.

So what should you do here? Heck if I know. I'm sure someone smart will say something in the comments.

Whatever you decide though, you'll probably separate that "module" concern away from your authored code. Perhaps a build process can help get all that together for you. Perhaps you can offer your thing in all the different module formats, so everybody is happy. What do you use for that build process? Grunt? Gulp? Too old? The hip thing these days is an npm script, which is just some terminal commands put into a `package.json` file.
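A sketch of what that might look like in `package.json` (the script names and the uglify-js dependency are just one possible setup, not a prescription):

```json
{
  "name": "the-thing",
  "version": "1.0.0",
  "main": "dist/the-thing.js",
  "scripts": {
    "test": "node test/the-thing.test.js",
    "minify": "uglifyjs src/the-thing.js -o dist/the-thing.min.js",
    "build": "npm run test && npm run minify"
  },
  "devDependencies": {
    "uglify-js": "^3.0.0"
  }
}
```

Then `npm run build` runs your tests and produces the minified file, no Grunt or Gulp required.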

While your build is processing, you might as well have that thing run your tests. You also might as well create an uglified version. Every library has a `the-thing.min.js`, right? You might as well put all that auto-generated stuff into a `/dist/` folder. Maybe you'll even be super hip and not check that into the repo. If you want this thing, you grab the repo and build it yourself!

Still, you do want people to use this thing. Having the public repo isn't enough; it's also going to need to go on npm, which is a whole different site and world. Hopefully, you've done some naming research early, as you'll need to name your package and that name must be unique. There's a variety of other stuff to know here too, like making sure the repo is clean and that you've ignored things you don't want everyone to download with the package.

Wait what about Yarn? Isn't that the new npm or something? You might want to publish there too. Some folks are hanging onto Bower as well, so you might wanna make your thing work there too.

OK! Done! Better take a quick nap and get ready for the demands to start rolling in on the issues board!

Just kidding everything will be fine.

Let’s say you wanna open source a little thing… is a post from CSS-Tricks