CSS-Tricks

Tips, Tricks, and Techniques on using Cascading Style Sheets.

Form Validation with Web Audio

Fri, 08/25/2017 - 13:27

I've been thinking about sound on websites for a while now.

When we talk about using sound on websites, most of us grimace and think of the old days, when blaring background music played when the website loaded.

Today this isn't and needn't be a thing. We can get clever with sound. We have the Web Audio API now and it gives us a great deal of control over how we design sound to be used within our web applications.

In this article, we'll experiment with just one simple example: a form.

What if, when you were filling out a form, it gave you auditory feedback as well as visual feedback? I can see your grimacing faces! But give me a moment.

We already have a lot of auditory feedback within the digital products we use. The keyboard on a phone produces a tapping sound. Even if you have "message received" sounds turned off, you're more than likely able to hear your phone vibrate. My MacBook makes a sound when I restart it and so do games consoles. Auditory feedback is everywhere and pretty well integrated, to the point that we don't really think about it. When was the last time you grimaced at the microwave when it pinged? I bet you're glad you didn't have to watch it to know when it was done.

As I'm writing this article, my computer just pinged. One of my open tabs sent me a useful notification. My point being: sound can be helpful. We may not all need to know with our ears whether we've filled out a form incorrectly, but there may be plenty of people out there who do find it beneficial.

So I'm going to try it!

Why now? We have the capabilities at our fingertips. I already mentioned the Web Audio API; we can use it to create, load, and play sounds. Add that to HTML form validation capabilities and we should be all set to go.

Let's start with a small form.

Here's a simple sign up form.

See the Pen Simple Form by Chris Coyier (@chriscoyier) on CodePen.

We can wire up a form like this with really robust validation.

With everything we learned from Chris Ferdinandi's guide to form validation, here's a version of that form with validation:

See the Pen Simple Form with Validation by Chris Coyier (@chriscoyier) on CodePen.

Getting The Sounds Ready

We don't want awful, obtrusive sounds, but we do want those sounds to represent success and failure. One simple way to do this would be to have higher, brighter sounds that rise in pitch for success and lower, more distorted sounds that fall for failure. This still gives us very broad options to choose from, but it's a general sound design pattern.

With the Web Audio API, we can create sounds right in the browser. Here are examples of little functions that play positive and negative sounds:

See the Pen Created Sounds by Chris Coyier (@chriscoyier) on CodePen.

Those are examples of creating sound with the oscillator, which is kinda cool because it doesn't require any web requests. You're literally coding the sounds. It's a bit like the SVG of the sound world. It can be fun, but it can be a lot of work and a lot of code.
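Here's a small sketch of that oscillator approach. The wave types, frequencies, and sweep lengths below are my own invented choices, not the demo's exact values: a rising sine for success, a falling sawtooth for failure.

```javascript
// Pure helper: pick a wave type and a frequency sweep for each outcome.
// Success rises (440 Hz -> 880 Hz), failure falls (220 Hz -> 110 Hz).
function toneParams(kind) {
  return kind === 'success'
    ? { type: 'sine', from: 440, to: 880 }
    : { type: 'sawtooth', from: 220, to: 110 }
}

// Browser-only part: build an OscillatorNode, sweep its frequency,
// and fade the gain out so the tone doesn't end with a click.
function playFeedback(ctx, kind, duration = 0.2) {
  const { type, from, to } = toneParams(kind)
  const osc = ctx.createOscillator()
  const gain = ctx.createGain()
  osc.type = type
  osc.frequency.setValueAtTime(from, ctx.currentTime)
  osc.frequency.exponentialRampToValueAtTime(to, ctx.currentTime + duration)
  gain.gain.setValueAtTime(0.3, ctx.currentTime)
  gain.gain.exponentialRampToValueAtTime(0.001, ctx.currentTime + duration)
  osc.connect(gain)
  gain.connect(ctx.destination)
  osc.start()
  osc.stop(ctx.currentTime + duration)
}
```

In a browser you'd call something like playFeedback(new AudioContext(), 'success'). The exponential ramps are what keep these tones from sounding like harsh square beeps.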

While I was playing around with this idea, Facebook released their Sound Kit, which is:

To help designers explore how sound can impact their designs, Facebook Design created a collection of interaction sounds for prototypes.

Here's an example of selecting a few sounds from that and playing them:

See the Pen Playing Sound Files by Chris Coyier (@chriscoyier) on CodePen.

Another way would be to fetch the sound file and play it with an AudioBufferSourceNode. Since we're using small files, there isn't much overhead here, but the demo above does fetch the file over the network every time it is played. If we kept the decoded sound in a buffer, we wouldn't have to do that.
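A minimal sketch of that buffering idea (the function names and the injected loader are mine, not from the demo): cache each decoded AudioBuffer in a Map keyed by URL, so repeat plays skip the network entirely.

```javascript
// Cache of decoded audio, keyed by URL.
const bufferCache = new Map()

// fetchAndDecode is injected so the caching logic stays testable;
// in a real page it would be something like:
//   url => fetch(url)
//     .then(res => res.arrayBuffer())
//     .then(data => ctx.decodeAudioData(data))
function loadSound(url, fetchAndDecode) {
  if (!bufferCache.has(url)) {
    bufferCache.set(url, fetchAndDecode(url))
  }
  return bufferCache.get(url)
}

// Browser-only part: play a decoded buffer through an AudioBufferSourceNode.
// Source nodes are one-shot, so we create a fresh one per play.
function playBuffer(ctx, buffer) {
  const source = ctx.createBufferSource()
  source.buffer = buffer
  source.connect(ctx.destination)
  source.start()
}
```

Note that an AudioBufferSourceNode can only be started once, which is why the buffer is cached but the source node is recreated for each play.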

Figuring Out When to Play the Sounds

This experiment of adding sounds to a form brings up a lot of questions around the UX of using sound within an interface.

So far, we have two sounds, a positive/success sound and a negative/fail sound. It makes sense that we'd play these sounds to alert the user of these scenarios. But when exactly?

Here's some food for thought:

  • Do we play sound for everyone, or is it an opt-in scenario? opt-out? Are there APIs or media queries we can use to inform the default?
  • Do we play success and fail sounds upon form submission or is it at the individual input level? Or maybe even groups/fieldsets/pages?
  • If we're playing sounds for each input, when do we do that? On blur?
  • Do we play sounds on every blur? Is there different logic for success and fail sounds, like only one fail sound until it's fixed?

There aren't any extremely established best practices for this stuff. The best we can do is make tasteful choices and do user research. Which is to say, the examples in this post are ideas, not gospel.
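One way to prototype answers to those questions (a sketch, not the article's demo code; nextFeedback and addAudioFeedback are invented names): keep a tiny bit of state per input so a failure sound plays only once until the field is fixed, and a success sound only celebrates the fix.

```javascript
// Pure decision helper: given whether the field was invalid before and
// whether it is valid now, return which sound to play (or null).
function nextFeedback(wasInvalid, isValid) {
  if (isValid) return wasInvalid ? 'success' : null // only celebrate a fix
  return wasInvalid ? null : 'failure'              // one failure sound until fixed
}

// Browser wiring: validate on blur using the Constraint Validation API,
// where `sounds` is an object like { success: fn, failure: fn }.
function addAudioFeedback(input, sounds) {
  let wasInvalid = false
  input.addEventListener('blur', () => {
    const isValid = input.checkValidity()
    const sound = nextFeedback(wasInvalid, isValid)
    if (sound) sounds[sound]()
    wasInvalid = !isValid
  })
}
```

Separating the decision logic from the event wiring makes it easy to try other policies, like playing the failure sound on every blur, without touching the DOM code.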

Demo

Here's one!

View Demo

And here's a video, with sound, of it working:

Voice

Greg Genovese has an article all about form validation and screen readers. "Readers" being relevant here, as that's all about audio! There is a lot to be done with ARIA roles and moving focus and such so that errors are clear, and it's clear how to fix them.

The Web Audio API could play a role here as well, or more likely, the Web Speech API. Audio feedback for form validation need not be limited to screen reader software. It certainly would be interesting to experiment with reading out actual error messages, perhaps in conjunction with other sounds like we've experimented with here.

Thoughts

All of this is what I call sound design in web design. It's not merely playing music and sounds; it's giving the soundscape thought and undertaking some planning and design, like you would with any other aspect of what you design and build.

There is loads more to be said on this topic and absolutely more ways in which you can use sound in your designs. Let's talk about it!

Form Validation with Web Audio is a post from CSS-Tricks

So you need a CSS utility library?

Thu, 08/24/2017 - 23:04

Let's define a CSS utility library as a stylesheet with many classes available to do small little one-off things. Like classes to adjust margin or padding. Classes to set colors. Classes to set specific layout properties. Classes for sizing. Utility libraries may approach these things in different ways, but seem to share that idea. Which, in essence, brings styling to the HTML level rather than the CSS level. The stylesheet becomes a dev dependency that you don't really touch.

Using ONLY a utility library vs. sprinkling in utilities

One of the ways you can use a utility library like the ones below is as an add-on to whatever else you're doing with CSS. These projects tend to have different philosophies, and perhaps don't always encourage that, but of course, you can do whatever you want. You could call that sprinkling in a utility library, and you might end up with HTML like:

<div class="module padding-2">
  <h2 class="section-header color-primary">Tweener :(</h2>
</div>

Forgive a little opinion-having here, but to me, this seems like something that will feel good in the moment and be regrettable later. Instead of all styling being done by your own named classes, styling information is now scattered: some applied directly in the HTML via utility classes, and some applied through your own naming conventions and CSS.

The other option is to go all in on a utility library, that way you've moved all styling information away from CSS and into HTML entirely. It's not a scattered system anymore.

I can't tell you if you'll love working with an all-in utility library approach like this or not, but long-term, I imagine you'll be happier picking either all-in or not-at-all than a tweener approach.

This is one of the definitions of Atomic CSS

You can read about that here. You could call using a utility library to do all your styling a form of "static" atomic CSS. That's different from a "programmatic" version, where you'd process markup like this:

<div class="Bd Bgc(#0280ae):h C(#0280ae) C(#fff):h P(20px)"> Lorem ipsum </div>

And out would come CSS that accommodates that.
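To make the "programmatic" idea concrete, here's a toy generator. The property table and regex are my own simplification for illustration, not Atomizer's actual grammar: it scans class names like P(20px) and emits matching CSS rules, skipping anything it doesn't recognize.

```javascript
// Tiny subset of an atomic-CSS property table (invented for illustration).
const PROPS = { P: 'padding', C: 'color', Bgc: 'background-color', Bd: 'border' }

// Turn a class attribute into CSS rules, escaping the characters
// that are special in CSS selectors (parens, #, comma, %).
function atomicToCSS(classAttr) {
  return classAttr
    .trim()
    .split(/\s+/)
    .flatMap(cls => {
      const m = cls.match(/^([A-Za-z]+)\(([^)]+)\)$/)
      if (!m || !PROPS[m[1]]) return []
      const selector = '.' + cls.replace(/[()#,%]/g, ch => '\\' + ch)
      return [`${selector} { ${PROPS[m[1]]}: ${m[2]} }`]
    })
    .join('\n')
}
```

For example, atomicToCSS('P(20px) C(#fff)') yields one rule for padding and one for color. A real processor like Atomizer also handles pseudo-classes like the :h suffix above, but the core idea is the same: the markup is the source of truth and the stylesheet is generated from it.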

Utility Libraries

Lemme just list a bunch of them that I've come across, pick out some quotes of what they have to say about themselves, and a code sample.

Shed.css

Shed.css came about after I got tired of writing CSS. All of the CSS in the world has already been written, and there's no need to rewrite it in every one of our projects.

Goal: To eliminate distraction for developers and designers by creating a set of options rather than encouraging bikeshedding, where shed gets its name.

<button class="
  d:i-b
  f-w:700
  p-x:3
  p-y:.7
  b-r:.4
  f:2
  c:white
  bg:blue
  t-t:u
  hover/bg:blue.9
">
  Log In
</button>

Tachyons

Create fast loading, highly readable, and 100% responsive interfaces with as little CSS as possible.

<div class="mw9 center pa4 pt5-ns ph7-l">
  <time class="f6 mb2 dib ttu tracked"><small>27 July, 2015</small></time>
  <h3 class="f2 f1-m f-headline-l measure-narrow lh-title mv0">
    <span class="bg-black-90 lh-copy white pa1 tracked-tight">
      Too many tools and frameworks
    </span>
  </h3>
  <h4 class="f3 fw1 georgia i">The definitive guide to the JavaScript tooling landscape in 2015.</h4>
</div>

Basscss

Using clear, humanized naming conventions, Basscss is quick to internalize and easy to reason about while speeding up development time with more scalable, more readable code.

<div class="flex flex-wrap items-center mt4">
  <h1 class="m0">Basscss <span class="h5">v8.0.2</span></h1>
  <p class="h3 mt1 mb1">Low-Level CSS Toolkit <span class="h6 bold caps">2.13 KB</span></p>
  <div class="flex flex-wrap items-center mb2">
  </div>
</div>

Beard

A CSS framework for people with better things to do

Beard's most popular and polarizing feature is its helper classes. Many people feel utility classes like the ones that Beard generates for you leads to bloat and are just as bad as using inline styles. We've found that having a rich set of helper classes makes your projects easier to build, easier to reason, and more bulletproof.

<div class="main-content md-ph6 pv3 md-pv6">
  <h2 class="tcg50 ft10 fw3 mb2 md-mb3">Tools</h2>
  <p class="tcg50 ft5 fw3 mb4 lh2">Beard isn't packed full of every feature you might need, but it does come with a small set of mixins to make life easier.</p>
  <h3 class="tcg50 ft8 fw3 mb2 md-mb3">appearance()</h3>
</div>

turretcss

Developed for design, turretcss is a styles and browser behaviour normalisation framework for rapid development of responsive and accessible websites.

<section class="background-primary padding-vertical-xl">
  <div class="container">
    <h1 class="display-title color-white">Elements</h1>
    <p class="lead color-white max-width-s">A guide to the use of HTML elements and turretcss's default styling definitions including buttons, figure, media, nav, and tables.</p>
  </div>
</section>

Expressive CSS
  • Classes are for visual styling. Tags are for semantics.
  • Start from a good foundation of base html element styles.
  • Use utility classes for DRY CSS.
  • Class names should be understandable at a glance.
  • Responsive layout styling should be easy (fun even).
<section class="grid-12 pad-3-vert s-pad-0">
  <div class="grid-12 pad-3-bottom">
    <h3 class="h1 pad-3-vert text-light text-blue">Principles</h3>
  </div>
  <div class="grid-12 pad-3-bottom">
    <h4 class="pad-1-bottom text-blue border-bottom marg-3-bottom">Do classes need to be ‘semantic’?</h4>
    <p class="grid-12 text-center">
      <span class="bgr-green text-white grid-3 s-grid-12 pad-2-vert pad-1-sides">Easy to understand</span>
      <span class="grid-1 s-grid-12 pad-2-vert s-pad-1-vert pad-1-sides text-green">+</span>
      <span class="bgr-green text-white grid-3 m-grid-4 s-grid-12 pad-2-vert pad-1-sides">Easy to add/remove</span>
      <span class="grid-1 s-grid-12 pad-2-vert s-pad-1-vert pad-1-sides text-green">=</span>
      <span class="bgr-green text-white grid-2 m-grid-3 s-grid-12 pad-2-vert pad-1-sides">Expressive</span>
    </p>
  </div>
</section>

Tailwind CSS

A Utility-First CSS Framework for Rapid UI Development

This thing doesn't even exist yet and they have more than 700 Twitter followers. That kind of thing convinces me there is a real desire for this stuff that shouldn't be ignored. We can get a peek at their promo site though:

<div class="constrain-md md:constrain-lg mx-auto pt-24 pb-16 px-4">
  <div class="text-center border-b mb-1 pb-20">
    <div class="mb-8">
      <div class="pill h-20 w-20 bg-light p-3 flex-center flex-inline shadow-2 mb-5">
      </div>
    </div>
  </div>
</div>

Utility Libraries as Style Guides

Marvel

As Marvel continues to grow, both as a product and a company, one challenge we are faced with is learning how to refine the Marvel brand identity and apply it cohesively to each of our products. We created this styleguide to act as a central location where we house a live inventory of UI components, brand guidelines, brand assets, code snippets, developer guidelines and more.

<div class="marginTopBottom-l textAlign-center breakPointM-marginTop-m breakPointM-textAlign-left breakPointS-marginTopBottom-xl">
  <h2 class="fontSize-xxxl">Aspect Ratio</h2>
</div>

Solid

Solid is BuzzFeed's CSS style guide. Influenced by frameworks like Basscss, Solid uses immutable, atomic CSS classes to rapidly prototype and develop features, providing consistent styling options along with the flexibility to create new layouts and designs without the need to write additional CSS.

<div class="xs-col-12 sm-col-9 lg-col-10 sm-offset-3 lg-offset-2">
  <div class="xs-col-11 xs-py3 xs-px1 xs-mx-auto xs-my2 md-my4 card">
    <h1 class="xs-col-11 sm-col-10 xs-mx-auto xs-border-bottom xs-pb3 xs-mb4 sm-my4">WTF is Solid?</h1>
    <div class="xs-col-11 sm-col-10 xs-mx-auto">
      <section class="xs-mb6">
        <h2 class="bold xs-mb2">What is Solid?</h2>
      </section>
      <section class="xs-mb6">
        <h2 class="bold xs-mb2">Installation</h2>
        <p class="xs-mb2">npm install --save bf-solid</p>
      </section>
      <section class="xs-mb6 xs-hide sm-block">
        <h2 class="bold xs-mb2">Download</h2>
        <p>
          <a href="#" download="" class="button button--secondary xs-mr1 xs-mb1">Source Files</a>
        </p>
      </section>
    </div>
  </div>
</div>

This is separate-but-related to the idea of CSS-in-JS

The tide in JavaScript has headed strongly toward components. Combining HTML and JavaScript has felt good to a lot of folks, so it's not terribly surprising to see styling start to come along for the ride. And it's not entirely just for the sake of it. There are understandable arguments for it, including things like the global nature of CSS leading toward conflicts and unintended side effects. If you can style things in such a way that never happens (which doesn't mean you need to give up on CSS entirely), I admit I can see the appeal.

This idea of styling components at the JavaScript level does seem to largely negate the need for utility libraries. It's probably largely a one-or-the-other kind of thing.


Cross Browser Testing with CrossBrowserTesting

Thu, 08/24/2017 - 12:56

(This is a sponsored post.)

Say you do your development work on a Mac, but you'd like to test out some designs in Microsoft Edge, which doesn't have a macOS version. Or vice versa! You work on a PC and you need to test on Safari, which no longer makes a Windows version.

It's a classic problem, and one I've been dealing with for a decade. I remember buying a copy of Windows Vista, buying software to manage virtual machines, and spending days just getting a testing environment set up. You can still go down that road, if you, ya know, love pain. Or you can use CrossBrowserTesting and have a super robust testing environment for a huge variety of browsers/platforms/versions without ever leaving the comfort of your favorite browser.

It's ridiculously wonderful.

Getting started, the most basic thing you can do is pick a browser/platform, specify a URL, and fire it up!

Once the test is running, you can interact with it just as you'd expect. Click, scroll, enter forms... it's a real browser! You have access to all the developer tools you'd expect. So for example, you can pop open the DevTools in Edge and poke around to figure out a bug.

When you need to do testing like this, it's likely you're in development, not in production. So how do you test that? Certainly, CrossBrowserTesting's servers can't see your localhost! Well, they can if you let them. They have a browser extension that allows you to essentially one-click-allow testing of local sites.

One of the things I find myself reaching to CrossBrowserTesting for is for getting layouts working across different browsers. If you haven't heard, CSS grid is here! It's supported in a lot of browsers, but not all, and not in the exact same way.

CrossBrowserTesting is the perfect tool to help me with this. I can pop open what I'm working on there, make changes, and get it working just how I need to. Perhaps getting the layout replicated in a variety of browsers, or just as likely, crafting a fallback that is different but looks fine.

Notice in that screenshot above the demo is on CodePen. That's relevant as CrossBrowserTesting allows you to test on CodePen for free! It's a great use case for something like Live View, where you can be working on a Pen, save it, and have the changes immediately reflected in the Live View preview, which works great even through CrossBrowserTesting.

The live testing is great, but there is also screenshot-based visual testing, in case you want to, say, test a layout in dozens of browsers at once. Much more practical to view a thumbnail grid all at once!

And there is even more advanced stuff. CrossBrowserTesting has automated testing features that make functional testing and visual testing on real browsers simple. Using Selenium, an open source testing framework, I can write scripts in the language of my choice that mimic a real user's actions: logging into the app, purchasing a plan, and creating a new project. I can then run the tests on CrossBrowserTesting, making sure that these actions work across browsers and devices. Because CrossBrowserTesting is in the cloud, I can run my tests against production websites and applications that bring in revenue.

Functional testing can be a life saver, assuring that everything is working and your customers can properly interact with your product. Once these tests have run, I can even see videos or screenshots of failures, and start debugging from there.

Direct Link to Article


Quantum CSS

Wed, 08/23/2017 - 12:59

"Quantum CSS" is the new name for "Stylo", the new CSS rendering engine that is part of "Project Quantum", the project to rewrite all of Firefox's internals, which will be called "Servo". I think there was a company memo to use the "replace a jet engine while the jet is flying" metaphor, but it's apt.

It's fascinating, but ultimately the win is for users of Firefox. Lin Clark:

It takes advantage of modern hardware, parallelizing the work across all of the cores in your machine. This means it can run up to 2 or 4 or even 18 times faster.

With any luck, CSS developers won't notice anything but the speed either.

Direct Link to Article


Implementing Push Notifications: The Back End

Wed, 08/23/2017 - 12:36

In the first part of this series we set up the front end with a Service Worker, a `manifest.json` file, and initialized Firebase. Now we need to create our database and watcher functions.

Article Series:
  1. Setting Up & Firebase
  2. The Back End (You are here)
Creating a Database

Log into Firebase and click on Database in the navigation. Under Data you can manually add database references and see changes happen in real-time.

Make sure to adjust the rule set under Rules so you don't have to fiddle with authentication during testing.

{
  "rules": {
    ".read": true,
    ".write": true
  }
}

Watching Database Changes with Cloud Functions

Remember the purpose of all this is to send a push notification whenever you publish a new blog post. So we need a way to watch for database changes in those data branches where the posts are being saved to.

With Firebase Cloud Functions we can automatically run backend code in response to events triggered by Firebase features.

Set up and initialize Firebase SDK for Cloud Functions

To start creating these functions we need to install the Firebase CLI. It requires Node v6.11.1 or later.

npm i firebase-tools -g

To initialize a project:

  1. Run firebase login
  2. Authenticate yourself
  3. Go to your project directory
  4. Run firebase init functions

A new folder called `functions` has been created. In there we have an `index.js` file in which we define our new functions.

Import the required Modules

We need to import the Cloud Functions and Admin SDK modules in `index.js` and initialize them.

const admin = require('firebase-admin'),
      functions = require('firebase-functions')

admin.initializeApp(functions.config().firebase)

The Firebase CLI will automatically install these dependencies. If you wish to add your own, modify the `package.json`, run npm install, and require them as you normally would.

Set up the Watcher

We target the database and create a reference we want to watch. In our case, we save to a posts branch which holds post IDs. Whenever a new post ID is added or deleted, we can react to that.

exports.sendPostNotification = functions.database.ref('/posts/{postID}').onWrite(event => {
  // react to changes
})

The name of the export, sendPostNotification, is for distinguishing all your functions in the Firebase backend.

All other code examples will happen inside the onWrite function.

Check for Post Deletion

If a post is deleted, we probably shouldn't send a push notification. So we log a message and exit the function. The logs can be found in the Firebase Console under Functions → Logs.

First, we get the post ID and check if a title is present. If it is not, the post has been deleted.

const postID = event.params.postID,
      postTitle = event.data.val()

if (!postTitle) return console.log(`Post ${postID} deleted.`)

Get Devices to show Notifications to

In the last article, we saved a device token in the updateSubscriptionOnServer function to a database branch called device_ids. Now we need to retrieve these tokens to be able to send messages to them. We receive so-called snapshots, which are basically data references containing the tokens.

If no snapshot and therefore no device token could be retrieved, log a message and exit the function since we don't have anybody to send a push notification to.

const getDeviceTokensPromise = admin.database()
  .ref('device_ids')
  .once('value')
  .then(snapshots => {
    if (!snapshots) return console.log('No devices to send to.')
    // work with snapshots
  })

Create the Notification Message

If snapshots are available, we need to loop over them and run a function for each of them which finally sends the notification. But first, we need to populate it with a title, body, and an icon.

const payload = {
  notification: {
    title: `New Article: ${postTitle}`,
    body: 'Click to read article.',
    icon: 'https://mydomain.com/push-icon.png'
  }
}

snapshots.forEach(childSnapshot => {
  const token = childSnapshot.val()
  admin.messaging().sendToDevice(token, payload).then(response => {
    // handle response
  })
})

Handle Send Response

If delivery fails or a token has become invalid, we can remove the token and log a message.

response.results.forEach(result => {
  const error = result.error
  if (error) {
    console.error('Failed delivery to', token, error)
    if (error.code === 'messaging/invalid-registration-token' ||
        error.code === 'messaging/registration-token-not-registered') {
      childSnapshot.ref.remove()
      console.info('Was removed:', token)
    }
  } else {
    console.info('Notification sent to', token)
  }
})

Deploy Firebase Functions

To upload `index.js` to the cloud, run the following command.

firebase deploy --only functions

Conclusion

Now when you add a new post, the subscribed users will receive a push notification to lead them back to your blog.

GitHub Repo

Demo Site

Article Series:
  1. Setting Up & Firebase
  2. The Back End (You are here)


Implementing Push Notifications: Setting Up & Firebase

Tue, 08/22/2017 - 14:20

You know those little notification windows that pop up in the top right (Mac) or bottom right (Windows) corner when, for example, a new article on your favorite blog or a new video on YouTube is published? Those are push notifications.

Part of the magic of these notifications is that they can appear even when we're not currently on that website to give us that information (after you've approved it). On mobile devices, where supported, you can even close the browser and still get them.

Article Series:
  1. Setting Up & Firebase (You are here!)
  2. The Back End (Coming soon!)
Push notification on a Mac in Chrome

A notification consists of the browser logo (so the user knows which software it comes from), a title, the website URL it was sent from, a short description, and a custom icon.

We are going to explore how to implement push notifications. Since they rely on Service Workers, check out these starting points if you are not familiar with them or the general functionality of the Push API:

What we are going to create

Preview of our push notification demo website

To test out our notifications system, we are going to create a page with:

  • a subscribe button
  • a form to add posts
  • a list of all the previously published posts

A repo on GitHub with the complete code can be found here, along with a preview of the project:

View Demo Site

And a video of it working:

Gathering all the tools

You are free to choose the back-end system which suits you best. I went with Firebase since it offers a special API which makes implementing a push notification service relatively easy.

We need:

In this part, we'll only focus on the front end, including the Service Worker and manifest, but to use Firebase, you will also need to register and create a new project.

Implementing Subscription Logic

HTML

We have a button to subscribe which gets enabled if 'serviceWorker' in navigator. Below that, a simple form and a list of posts:

<button id="push-button" disabled>Subscribe</button>

<form action="#">
  <input id="input-title">
  <label for="input-title">Post Title</label>
  <button type="submit" id="add-post">Add Post</button>
</form>

<ul id="list"></ul>

Implementing Firebase

To make use of Firebase, we need to implement some scripts.

<script src="https://www.gstatic.com/firebasejs/4.1.3/firebase-app.js"></script>
<script src="https://www.gstatic.com/firebasejs/4.1.3/firebase-database.js"></script>
<script src="https://www.gstatic.com/firebasejs/4.1.3/firebase-messaging.js"></script>

Now we can initialize Firebase using the credentials given under Project Settings → General. The sender ID can be found under Project Settings → Cloud Messaging. The settings are hidden behind the cog icon in the top left corner.

firebase.initializeApp({
  apiKey: '<API KEY>',
  authDomain: '<PROJECT ID>.firebaseapp.com',
  databaseURL: 'https://<PROJECT ID>.firebaseio.com',
  projectId: '<PROJECT ID>',
  storageBucket: '<PROJECT ID>.appspot.com',
  messagingSenderId: '<SENDER ID>'
})

Service Worker Registration

Firebase offers its own service worker setup by creating a file called `firebase-messaging-sw.js` which holds all the functionality to handle push notifications. But usually, you need your Service Worker to do more than just that. So with the useServiceWorker method we can tell Firebase to use our own `service-worker.js` file as well.

Now we can create a userToken and a isSubscribed variable which will be used later on.

const messaging = firebase.messaging(),
      database = firebase.database(),
      pushBtn = document.getElementById('push-button')

let userToken = null,
    isSubscribed = false

window.addEventListener('load', () => {
  if ('serviceWorker' in navigator) {
    navigator.serviceWorker.register('service-worker.js')
      .then(registration => {
        messaging.useServiceWorker(registration)
        initializePush()
      })
      .catch(err => console.log('Service Worker Error', err))
  } else {
    pushBtn.textContent = 'Push not supported.'
  }
})

Initialize Push Setup

Notice the function initializePush() after the Service Worker registration. It checks if the current user is already subscribed by looking up a token in localStorage. If there is a token, it changes the button text and saves the token in a variable.

function initializePush() {
  userToken = localStorage.getItem('pushToken')
  isSubscribed = userToken !== null

  updateBtn()

  pushBtn.addEventListener('click', () => {
    pushBtn.disabled = true
    if (isSubscribed) return unsubscribeUser()
    return subscribeUser()
  })
}

Here we also handle the click event on the subscription button. We disable the button on click to avoid multiple triggers of it.

Update the Subscription Button

To reflect the current subscription state, we need to adjust the button's text and style. We can also check if the user did not allow push notifications when prompted.

function updateBtn() {
  if (Notification.permission === 'denied') {
    pushBtn.textContent = 'Subscription blocked'
    return
  }
  pushBtn.textContent = isSubscribed ? 'Unsubscribe' : 'Subscribe'
  pushBtn.disabled = false
}

Subscribe User

Let's say the user visits us for the first time in a modern browser, so they are not yet subscribed, and Service Workers and the Push API are supported. When they click the button, the subscribeUser() function is fired.

function subscribeUser() {
  messaging.requestPermission()
    .then(() => messaging.getToken())
    .then(token => {
      updateSubscriptionOnServer(token)
      isSubscribed = true
      userToken = token
      localStorage.setItem('pushToken', token)
      updateBtn()
    })
    .catch(err => console.log('Denied', err))
}

Here we ask permission to send push notifications to the user by calling messaging.requestPermission().

The browser asking permission to send push notifications.

If the user blocks this request, the button is adjusted the way we implemented it in the updateBtn() function. If the user allows this request, a new token is generated, saved in a variable as well as in localStorage. The token is being saved in our database by updateSubscriptionOnServer().

Save Subscription in our Database

If the user was already subscribed, we target the right database reference where we saved the tokens (in this case device_ids), look for the token the user already has provided before, and remove it.

Otherwise, we want to save the token. With .once('value'), we receive the key values and can check whether the token is already there. This serves as a second layer of protection on top of the lookup in localStorage in initializePush(), since the token might get deleted from there for various reasons. We don't want the user to receive multiple notifications with the same content.

function updateSubscriptionOnServer(token) {
  if (isSubscribed) {
    return database.ref('device_ids')
      .equalTo(token)
      .on('child_added', snapshot => snapshot.ref.remove())
  }

  database.ref('device_ids').once('value')
    .then(snapshots => {
      let deviceExists = false
      snapshots.forEach(childSnapshot => {
        if (childSnapshot.val() === token) {
          deviceExists = true
          return console.log('Device already registered.')
        }
      })
      if (!deviceExists) {
        console.log('Device subscribed')
        return database.ref('device_ids').push(token)
      }
    })
}

Unsubscribe User

If the user clicks the button after subscribing again, their token gets deleted. We reset our userToken and isSubscribed variables as well as remove the token from localStorage and update our button again.

function unsubscribeUser() {
  messaging.deleteToken(userToken)
    .then(() => {
      updateSubscriptionOnServer(userToken)
      isSubscribed = false
      userToken = null
      localStorage.removeItem('pushToken')
      updateBtn()
    })
    .catch(err => console.log('Error unsubscribing', err))
}

To let the Service Worker know we use Firebase, we import the scripts into `service-worker.js` before anything else.

importScripts('https://www.gstatic.com/firebasejs/4.1.3/firebase-app.js')
importScripts('https://www.gstatic.com/firebasejs/4.1.3/firebase-database.js')
importScripts('https://www.gstatic.com/firebasejs/4.1.3/firebase-messaging.js')

We need to initialize Firebase again since the Service Worker cannot access the data inside our `main.js` file.

firebase.initializeApp({
  apiKey: "<API KEY>",
  authDomain: "<PROJECT ID>.firebaseapp.com",
  databaseURL: "https://<PROJECT ID>.firebaseio.com",
  projectId: "<PROJECT ID>",
  storageBucket: "<PROJECT ID>.appspot.com",
  messagingSenderId: "<SENDER ID>"
})

Below that we add all events around handling the notification window. In this example, we close the notification and open a website after clicking on it.

self.addEventListener('notificationclick', event => {
  event.notification.close()
  event.waitUntil(
    self.clients.openWindow('https://artofmyself.com')
  )
})

Another example would be synchronizing data in the background. Read Google's article about that.

Show Messages when on Site

When we are subscribed to notifications of new posts but are already visiting the blog at the same moment a new post is published, we don't receive a notification.

A way to solve this is by showing a different kind of message on the site itself like a little snackbar at the bottom.

To intercept the payload of the message, we call the onMessage method on Firebase Messaging.

The styling in this example uses Material Design Lite.

<div id="snackbar" class="mdl-js-snackbar mdl-snackbar">
  <div class="mdl-snackbar__text"></div>
  <button class="mdl-snackbar__action" type="button"></button>
</div>

import 'material-design-lite'

messaging.onMessage(payload => {
  const snackbarContainer = document.querySelector('#snackbar')
  let data = {
    message: payload.notification.title,
    timeout: 5000,
    actionHandler() {
      location.reload()
    },
    actionText: 'Reload'
  }
  snackbarContainer.MaterialSnackbar.showSnackbar(data)
})

Adding a Manifest

The last step for this part of the series is adding the Google Cloud Messaging Sender ID to the `manifest.json` file. This ID makes sure Firebase is allowed to send messages to our app. If you don't already have a manifest, create one and add the following. Do not change the value.

{
  "gcm_sender_id": "103953800507"
}

Now we are all set up on the front end. What's left is creating our actual database and the functions to watch database changes in the next article.

Article Series:
  1. Setting Up & Firebase (You are here!)
  2. The Back End (Coming soon!)

Implementing Push Notifications: Setting Up & Firebase is a post from CSS-Tricks

Be Slightly Careful with Sub Elements of Clickable Things

Tue, 08/22/2017 - 13:02

Say you want to attach a click handler to a <button>. You almost surely are, as outside of a <form>, buttons don't do anything without JavaScript. So you do that with something like this:

var button = document.querySelector("button");

button.addEventListener("click", function(e) {
  // button was clicked
});

But that doesn't use event delegation at all.

Event delegation is where you bind the click handler not directly to the element itself, but to an element higher up the DOM tree. The idea being that you can rip out and plop in new DOM stuff inside of there and not worry about events being destroyed and needing to re-bind them.

Say our button has a gear icon in it:

<button>
  <svg>
    <use xlink:href="#gear"></use>
  </svg>
</button>

And we bind it by watching for clicks way up on the document element itself:

document.documentElement.addEventListener("click", function(e) {

});

How do we know if that click happened on the button or not? We have the target of the event for that:

document.documentElement.addEventListener("click", function(e) {
  console.log(e.target);
});

This is where it gets tricky. In this example, even if the user clicks right on the button somewhere, depending on exactly where they click, e.target could be:

  • The button element
  • The svg element
  • The use element

So if you were hoping to be able to do something like this:

document.documentElement.addEventListener("click", function(e) {
  if (e.target.tagName === "BUTTON") {
    // may not work, because might be svg or use
  }
});

Unfortunately, it's not going to be that easy. It doesn't matter if you check for classname or ID or whatever else, the element itself that you are expecting might just be wrong.
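One JavaScript-side way around this (a sketch, not from the post itself) is to walk up from e.target until you reach the element you actually care about, which is exactly what Element.closest() does in modern browsers:

```javascript
// Walk up the tree from the event target to the nearest matching
// ancestor. Modern browsers can do this with e.target.closest("button");
// the manual loop below shows the same idea and is easy to test.
function findClickableAncestor(el, tagName) {
  while (el) {
    if (el.tagName === tagName) return el;
    el = el.parentElement;
  }
  return null;
}

// Browser usage (guarded so the snippet is inert outside the browser):
if (typeof document !== "undefined") {
  document.documentElement.addEventListener("click", function(e) {
    var button = findClickableAncestor(e.target, "BUTTON");
    // or: var button = e.target.closest("button");
    if (button) {
      // the click landed on the button or anything inside it (svg, use, ...)
    }
  });
}
```

This way it doesn't matter whether e.target is the button, the svg, or the use element; the lookup resolves to the button either way.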

There is a pretty decent CSS fix for this... If we make sure nothing within the button has pointer-events, clicks inside the button will always be for the button itself:

button > * {
  pointer-events: none;
}

This also prevents a situation where other JavaScript has prevented the event from bubbling up to the button itself (or higher).

document.querySelector("button > svg").addEventListener("click", function(e) {
  e.stopPropagation();
  e.preventDefault();
});

document.querySelector("button").addEventListener("click", function() {
  // If the user clicked right on the SVG,
  // this will never fire
});

Be Slightly Careful with Sub Elements of Clickable Things is a post from CSS-Tricks

Strongly Held Opinions, Gone Away

Mon, 08/21/2017 - 21:29

I received a really wonderful question from Bryan Braun the other day during a workshop I was giving at Sparkbox. He asked if, over the years, there were opinions about web design and development I once strongly held that I don't anymore.

I really didn't have a great answer at the time, even though surely if I could rewind my brain there would be some embarrassing ones in there.

At the risk of some heavy self-back-patting, this is exactly the reason I try and be pretty open-minded. If you aren't, you end up eating crow. And for what? When you crap on an idea, you sound like a jerk at the time, and likely cause more harm than good. If you end up right, you were still a jerk. If you end up wrong, you were a jerk and a fool.

I like the sentiment the web is a big place. It's a quick way of saying there are no hard and fast right answers in a playground this big with loose rules, diversity of everything, and economic overlords.

I don't want to completely punt on this question though.

I've heard Trent Walton say a number of times that, despite him being all-in on Responsive Web Design now, at first it seemed like a very bad idea to him.

I remember feeling very late to the CSS preprocessing world, because I spent years rolling my eyes at it. I thought it the result of back end nerds sticking their noses into something and bringing programming somewhere that didn't need it. Looking back, it was mostly me being afraid to learn the tools needed to bring it into a workflow.

It's not hard to find industry-wide holy wars these days, where strongly held opinions duke it out over time, and probably end up giving ground to each other in the end.

But what of those internal personal battles? I'd be very interested to hear people's answers on this...

What strongly-held opinion did you used to have about web design and development, but not anymore?

Strongly Held Opinions, Gone Away is a post from CSS-Tricks

Double Opt-In Email Intros

Mon, 08/21/2017 - 14:19

You know those "introduction" emails? Someone thinks you should meet someone else, and emails happen about it. Or it's you doing the introducing, either by request or because you think it's a good idea. Cutting to the chase here: those emails could be done better. Eight years ago, Fred Wilson coined the term "double opt-in intro".

This is how it can work.

You're doing the vetting

Since you're writing the emails here, it's your reputation at stake here. If you do an introduction that is obnoxious for either side, they'll remember. Make sure you're introducing people that you really do think should know each other. Like a bizdev cupid.

You're gonna do two (or three) times the writing

The bad way to do an intro is to email both people at once. Even if this introduction has passed your vetting, you have no idea how it's going to turn out. There is a decent chance either of them or both aren't particularly interested in this, which makes you look like a dolt. It doesn't respect either of their time, puts your reputation at risk, and immediately puts everyone into an awkward position (if they ignore it they look like an asshole).

Instead, you're going to write two emails, one to each person you're trying to introduce. And you're not going to reveal who the other person is, except with non-identifying relevant details and your endorsement.

They do the opt-ing in

If either of the folks is interested in this introduction, they can email you back. Give them an easy out though; I'd say something like "if for any reason you aren't into it, just tell me so or ignore this, I promise I understand". If you don't make it easy to blow you off, you're just transferring the awkward situation to yourself.

If either of them isn't into it, it doesn't matter. They don't know who the other is and there is no awkwardness or burnt bridge.

If both are into it, great, now it's time for the third email actually introducing them. Get out of the way quickly.

It's about more than awkwardness and reputation, it's about safety

See:

It's also why double opt-in intros are *a must*. Please please please don't go intro'ing people to each other without asking first.

— Lara Hogan (@lara_hogan) August 5, 2017

Just because you have someone's email address in your book doesn't mean you should be giving it out to anyone that asks. Better to just assume any contact info you have for someone else is extremely private and only to be shared with their permission.

Double Opt-In Email Intros is a post from CSS-Tricks

Pattern Library Workflow

Fri, 08/18/2017 - 15:55

Jon Gunnison documents some things that have made pattern libraries successful at Allstate. Tidbits I found interesting:

  • There are specific jobs (part of what he calls "governance") for maintaining the library. I love that they are called librarians. A "designer librarian" and a "UI dev librarian".
  • Acknowledgment that there are "snowflakes", or single instances that don't fit into a pattern (at least right now).
  • The pattern library is fed by information that comes in from lots of different places. Hence, requiring librarians to triage.


Pattern Library Workflow is a post from CSS-Tricks

Using Custom Properties to Modify Components

Fri, 08/18/2017 - 15:07

Instead of using custom properties to style whole portions of a website’s interface I think we should use them to customize and modify tiny components. Here’s why.

Whenever anyone mentions CSS custom properties they often talk about the ability to theme a website’s interface in one fell swoop. For example, if you’re working somewhere like a big news org, then you might want to specify a distinct visual design for the Finance section and the Sports section – buttons, headers, pull quotes and text color could all change on the fly.

Custom properties would make this sort of theming easy because we won’t have to add a whole bunch of classes to each component. All we’d have to do is edit a single variable that’s in the :root, plus we can then edit those custom props with JavaScript which is something we can’t do with something like Sass variables.
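That JavaScript hook could look something like this sketch (the --mainColor name is borrowed from the example below; the themeToCss helper is a made-up illustration, not part of any library):

```javascript
// Hypothetical helper: turn a theme object into custom property
// declarations, e.g. { mainColor: "#5eb5ff" } -> "--mainColor: #5eb5ff;"
function themeToCss(theme) {
  return Object.keys(theme)
    .map(function(key) { return "--" + key + ": " + theme[key] + ";"; })
    .join(" ");
}

// In the browser, re-theming the whole site is one property write away:
if (typeof document !== "undefined") {
  document.documentElement.style.setProperty("--mainColor", "#ff6969");
}
```

Every rule that references var(--mainColor) repaints with the new value, which is the bit Sass variables can't do at runtime.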

A while back Chris wrote about this use case in a post about custom properties and theming and the example he gave looked like this:

:root {
  --mainColor: #5eb5ff;
}

header {
  background: var(--mainColor);
}

footer {
  background: var(--mainColor);
}

See the Pen Theming a site with CSS Custom Properties by Chris Coyier (@chriscoyier) on CodePen.

But the more I learn about building big ol’ systems with CSS, the more I think that changing global styles like this makes it really difficult to keep the code clean and consistent over the long haul. And if you’re working on any large web app then you’re probably using something like React where everything is made of tiny reusable components anyway, especially because at this scale the cascade can be scary and dangerous.

If we’re working on larger, more complex systems then how should we be using custom properties then? Well I think the best option is to keep them on the component level to help make our CSS really clean. So instead of adding variables to the root element we could bind them to the component instead, like this:

.btn {
  --btnColor: #5eb5ff;
}

After which we could set properties such as color or border to use this variable:

.btn {
  --btnColor: #5eb5ff;
  border: 1px solid var(--btnColor);
  color: var(--btnColor);

  &:hover {
    color: white;
    background-color: var(--btnColor);
  }
}

So far so good! We can then add modifier classes that simply change the value of the custom property:

.btn-red {
  --btnColor: #ff6969;
}

.btn-green {
  --btnColor: #7ae07a;
}

.btn-gray {
  --btnColor: #555;
}

See the Pen Custom Properties by Robin Rendle (@robinrendle) on CodePen.

See how nice and tidy that is? With just a few lines of CSS we’ve created a whole system of buttons – we could easily change the font-size or add animations or anything else and keep our classes nice and small without messing with the global scope of our CSS. Especially since all this code is likely to live in a single file like buttons.scss, it’s helpful that all the logic exists in one place.

Anyway, for sure this method of using custom properties on a component level isn’t as exciting or stylish as using a single variable to style every part of a website but I’m not entirely sure how useful that sort of theming is anyway. A lot of the time a design will require a bunch of tiny adjustments to each component for usability reasons so it makes sense to break everything down to the component level.

What do you think is the most useful thing about custom properties? I’d love to hear what everyone thinks about this stuff in the comments below.

Using Custom Properties to Modify Components is a post from CSS-Tricks

Saving SVG with Space Around It from Illustrator

Fri, 08/18/2017 - 14:43

Say you have a graphic like this in Adobe Illustrator:

Note how the art doesn't touch the edges of the artboard. Say you want that space around it, and you want to save it as SVG for use on the web.

Nope: Save for Web

THE CLAW! You'll see the space around the art here, but unfortunately the classic Save for Web dialog doesn't export as SVG at all, so that's not really an option.

They are already calling this a "legacy" feature, so I imagine it'll be gone soon.

Nope: Export As

The "Export As" feature supports SVG, and you'll likely be pretty pleased with the output. It's fairly optimized, cruft-free, and pretty much ready to use on the web.

But... it crops to the art with no option to change that, so we'll lose the space around that we're shooting for here.

A possible workaround here is putting a rectangle behind the art with the spacing we need around it, but then we get a rectangle in the output, which shouldn't be necessary.

Nope: Asset Export

The Asset Export panel is mighty handy, but the export crops to the art and there is no way to change that.

Yep: Export for Screens

The trick in preserving the space is to export the artboard itself. You can do that from the Export for Screens dialog.

The viewBox will then reflect the artboard and the space we have left around the art. That's what we were aiming for, so I'm glad there is a way!

Saving SVG with Space Around It from Illustrator is a post from CSS-Tricks

Visual Email Builder Apps

Thu, 08/17/2017 - 11:33

I bet y'all know that apps like Campaign Monitor and MailChimp have visual email builders built right into them. You drag and drop different types of content right into a layout. You edit text right within the email. It's nice. It's a lot nicer than editing the quagmire of HTML underneath, anyway!

But not everybody needs all the rest of the features that those apps bring, like list management and the actual sending of the email. Perhaps you have an app that already handles that kind of thing. You just need to design some emails, get the final HTML, and use it in your own app.

When I was looking around at email tooling, I was surprised there were a good number of apps that help just with the visual email building. Very cool.

  • Toptol
  • BEE free
  • EDMdesigner
  • RED (Responsive Email Designer)
  • Taxi for Email

I haven't used any of them extensively enough to make a firm recommendation, but I've been dabbling and I like that they exist and that there are options.

Visual Email Builder Apps is a post from CSS-Tricks

Oxygen – The WordPress Visual Site Builder for Real Designers?

Thu, 08/17/2017 - 10:55

WordPress page builders are generally shunned by those who know how to code. They are generally bloated and slow. And you are offered very limited customization options. But what if there was a visual site builder meant for advanced, professional website designers?

It turns out there is! It's called Oxygen, and it's quickly becoming the tool of choice for WordPress web designers.

Notice that with Oxygen, you design your entire site - content, headers, footers, menus, etc. It totally replaces your WordPress theme.

All pages are constructed from fundamental HTML elements - section, div, h1...6, p, span, a, img, and a few more. Then, you visually edit CSS properties to get everything looking the way you want.

So unlike a typical page builder, you can design anything. It's like hand-coding, but visually. Think Webflow, but for WordPress.

To integrate with WordPress and design layouts for posts, custom post types, archives, etc., Oxygen has a robust templating system. Basically, it replaces the WordPress template hierarchy with a visual system for applying templates.

Then you can write PHP code inside Oxygen's interface and call WP API functions, run the WordPress loop, etc.

There are really no limits to what you can do with Oxygen. It is far and away more powerful than any other WordPress page building tool available. Other than hand-coding your WordPress theme, nothing I've seen gives you this kind of flexibility.

This might be the future, so check it out! You will be pleasantly surprised.


Oxygen – The WordPress Visual Site Builder for Real Designers? is a post from CSS-Tricks

Using the Paint Timing API

Wed, 08/16/2017 - 12:56

It's a great time to be a web performance aficionado, and the arrival of the Paint Timing API in Chrome 60 is proof positive of that fact. The Paint Timing API is yet another addition to the burgeoning Performance API, but instead of capturing page and resource timings, this new and experimental API allows you to capture metrics on when a page begins painting.

If you haven't experimented with any of the various performance APIs, it may help if you brush up a bit on them, as the syntax of this API has much in common with those APIs (particularly the Resource Timing API). That said, you can read on and get something out of this article even if you don't. Before we dive in, however, let's talk about painting and the specific timings this API collects.

Why do we need an API for measuring paint times?

If you're reading this, you're likely familiar with what painting is. If not, it's a simple concept to grasp: Painting is any activity by the browser that involves drawing pixels to the browser window. It's a crucial part of the rendering pipeline. When we talk about painting in performance parlance, we're often referring to the time at which the browser begins to paint a page as it loads. This moment is appropriately called "time to first paint".

Why is this metric important to know? Because it signifies to us the earliest possible point at which something appears after a user requests a page. A lot goes on as a page is loading, but one thing we know is that the sooner we can get something to appear for the user, the sooner they'll realize that something is happening. Sort of like your LDL cholesterol, most performance-oriented goals involve lowering your numbers. Until you know what your numbers are to begin with, though, reaching those goals can be an exercise in futility.

Thankfully, this is where the Paint Timing API can help us out. This API allows you to capture how fast a page is painting for your site's visitors using JavaScript. Synthetic testing in programs such as Lighthouse or sitespeed.io is great in that it gives us a baseline to work with for improving the performance of sites in our care, but all of that testing is in a vacuum. It doesn't tell you how your site is performing for those who actually use it.

Compared to similar performance APIs, the Paint Timing API is much more simplified. It provides us with only two metrics:

first-paint: This is likely what you think it is. The point at which the browser has painted the first pixel on the page. It may look something like this:

What `first-paint` might look like.

first-contentful-paint: This is a bit different than first-paint in that it captures the time at which the first bit of content is painted, be it text, an image, or whatever isn't some variation of non-contentful styling. That scenario may look something like this:

What a `first-contentful-paint` event might look like.

It's important to point out that these two points in time may not always be so distinct from one another. Depending on the client-side architecture of a given website, first-paint and first-contentful-paint metrics may not differ. Where faster and lighter web experiences are concerned, they'll often be nearly (or even exactly) the same thing. On larger sites where client side architecture involves a lot of assets (and/or when connections are slower), these two metrics may occur further apart.

In any case, let's get an eye on how to use this API, which has landed in Chrome 60.

A straightforward use case

There are a couple ways you can use this API. The easiest way is to attach the code to an event that occurs some time after the first paint. The reason you might want to attach this to an event instead of running it immediately is so the metrics are actually available when you attempt to pull them from the API. Take this code for example:

if("performance" in window){
  window.addEventListener("load", ()=>{
    let paintMetrics = performance.getEntriesByType("paint");
    if(paintMetrics !== undefined && paintMetrics.length > 0){
      paintMetrics.forEach((paintMetric)=>{
        console.log(`${paintMetric.name}: ${paintMetric.startTime}`);
      });
    }
  });
}

This code does the following:

  1. We do a simple check to see if the performance object is in the window object. This prevents any of our code from running if performance is unavailable.
  2. We attach code using addEventListener to the window object's load event, which will fire when the page and its assets are fully loaded.
  3. In the load event code, we use the performance object's getEntriesByType method to retrieve all event types of "paint" to a variable called paintMetrics.
  4. Because only Chrome 60 (and later) currently implements the Paint Timing API, we need to check whether any entries were returned. To do this, we check that paintMetrics is not undefined and that its length is greater than 0.
  5. If we've made it past those checks, we then output the name of the metric and its start time to the console, which will look something like this:
Paint timings exposed in the console.

The timings you see in the console screenshot above are in milliseconds. From here, you can send these metrics someplace to be stored and analyzed for later.
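Sending them off could be as small as a beacon. Here's a hedged sketch (the /paint-metrics endpoint is hypothetical, and paintPayload is just an illustrative helper):

```javascript
// Illustrative helper: shape paint entries into a compact payload.
function paintPayload(entries) {
  return entries.map(function(entry) {
    return { name: entry.name, startTime: Math.round(entry.startTime) };
  });
}

// Browser usage: after load, beacon the metrics to a collection endpoint.
// navigator.sendBeacon queues the request without blocking the page.
if (typeof window !== "undefined" && "performance" in window) {
  window.addEventListener("load", function() {
    var payload = paintPayload(performance.getEntriesByType("paint"));
    if (payload.length > 0 && navigator.sendBeacon) {
      navigator.sendBeacon("/paint-metrics", JSON.stringify(payload));
    }
  });
}
```

Where "/paint-metrics" points and what the back end does with the data is entirely up to you.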

This works great and all, but what if we want to have access to these metrics as soon as the browser collects them? For that, we'll need PerformanceObserver.

Capturing paint metrics with PerformanceObserver

If you absolutely, positively need to access timings as soon as they're available in the browser, you can use PerformanceObserver. Using PerformanceObserver can be tricky, especially if you want to make sure you're not breaking behavior for browsers that don't support it, or if browsers do support it, but don't support "paint" events. This latter scenario is pertinent to our efforts here because polling for unsupported events can throw a TypeError.

Because PerformanceObserver gathers metrics and logs them asynchronously, our best bet is to use a promise, which helps us handle async'y stuff without the callback hell of yesteryear. Take this code, for example:

if("PerformanceObserver" in window){
  let observerPromise = new Promise((resolve, reject)=>{
    let observer = new PerformanceObserver((list)=>{
      resolve(list);
    });
    observer.observe({
      entryTypes: ["paint"]
    });
  }).then((list)=>{
    list.getEntries().forEach((entry)=>{
      console.log(`${entry.name}: ${entry.startTime}`);
    });
  }).catch((error)=>{
    console.warn(error);
  });
}

Let's walk through this code:

  1. We check for the existence of the PerformanceObserver object in window. If PerformanceObserver doesn't exist, nothing happens.
  2. A Promise is created. In the first part of the promise chain, we create a new PerformanceObserver object and store it in the observer variable. This observer contains a callback that resolves the promise with a list of paint timings.
  3. We have to get those paint timings from somewhere, right? That's where the observe method kicks in. This method lets us define which types of performance entries we want. Since we want painting events, we just pass in an array with an entry type of "paint".
  4. If the browser supports gathering "paint" events with PerformanceObserver, the promise will resolve and the next part of the chain kicks in where we then have access to the entries through the list variable's getEntries method. This will produce console output much like the previous example.
  5. If the current browser doesn't support gathering "paint" events with PerformanceObserver, the catch method provides access to the error message. From here, we can do whatever we want with this information.

Now you have a way to gather metrics asynchronously, instead of having to wait for the page to load. I personally prefer the previous method, as the code is more terse and readable (to me, anyway). I'm sure my methods aren't the most robust, but they are illustrative of the fact that you can gather paint timings in the browser in a predictable way that shouldn't throw errors in older browsers.

What would I use this for?

Depends on what you're after. Maybe you want to see just how fast your site is rendering for real users out in the wild. Maybe you want to gather data for research. At the time of writing, I'm conducting an image quality research project that gauges participants on how they perceive lossy image quality of JPEGs and WebP images. As part of my research, I use other timing APIs to gather performance-related information, but I'm also gathering paint timings. I don't know if this data will prove useful, but collecting and analyzing it in tandem with other metrics may be helpful to my findings. However you use this data is really up to you. In my humble opinion, I think it's great that this API exists, and I hope more browsers move to implement it soon.

Some other stuff you might want to read

Reading this short piece might have gotten you interested in some other pieces of the broader performance interface. Here's a few articles for you to check out if your curiosity has been sufficiently piqued:

  • The surface of this API is shared with the established Resource Timing API, so you should brush up on that. If you feel comfortable with the code in the article, you'll be able to immediately benefit from this incredibly valuable API.
  • While this API doesn't share much of a surface with the Navigation Timing API, you really ought to read up on it. This API allows you to collect timing data on how fast the HTML itself is loading.
  • PerformanceObserver has a whole lot more to it than what I've illustrated here. You can use it to get resource timings and user timings. Read up on it here.
  • Speaking of user timings, there's an API for that. With this API, you can measure how long specific JavaScript tasks are taking using highly accurate timestamps. You could also use this tool to measure latency in how users interact with the page.
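The User Timing API mentioned in that last bullet fits in a few lines. A sketch (the "filter" mark names are made up for illustration):

```javascript
// Mark two points in time, then measure the span between them.
// Guarded so the snippet is a no-op where the API isn't available.
if (typeof performance !== "undefined" && performance.mark &&
    performance.getEntriesByName) {
  performance.mark("filter-start");
  // ... some work you want to time ...
  performance.mark("filter-end");
  performance.measure("filter", "filter-start", "filter-end");

  var entries = performance.getEntriesByName("filter");
  if (entries.length > 0) {
    console.log(entries[0].duration); // elapsed milliseconds
  }
}
```

The duration comes from the same high-resolution clock as the paint timings, so the two kinds of measurements can be analyzed side by side.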

Now that you've gotten your hands dirty with this API, head out and see what it (and other APIs) can do for you in your quest to make the web faster for users!

Jeremy Wagner is the author of Web Performance in Action, available now from Manning Publications. Use promo code sswagner to save 42%.

Check him out on Twitter: @malchata

Using the Paint Timing API is a post from CSS-Tricks

A Poll About Pattern Libraries and Hiring

Tue, 08/15/2017 - 13:49

I was asked (by this fella on Twitter) a question about design patterns. It has an interesting twist though, related to hiring, which I hope makes for a good poll.


I'll let this run for a week or two. Then (probably) instead of writing a new post with the results, I'll update this one with the results. Feel free to comment with the reasoning for your vote.

A Poll About Pattern Libraries and Hiring is a post from CSS-Tricks

(An Interview About) imgix Page Weight

Tue, 08/15/2017 - 13:41

Imgix has been a long-time display ad sponsor here on CSS-Tricks. This post is not technically sponsored, I just noticed that they released a tool for analyzing image performance at any given URL that is pretty interesting.

We know web performance is a big deal. We know that images are perhaps the largest offender in ballooning page weights across the web. We know we have tools for looking at page performance as a whole. It seems fairly new to me to have tools specifically for analyzing and demonstrating how we could have done better with images. That's what this Page Weight tool is.

Clearly this is a marketing tool for them. You put in a URL, and it tells you how you could have done better, and specifically how imgix can help do that. I'm generally a fan of that. Tools with businesses behind them have the resources and motivation to stick around and get better. But as ever, something to be aware of.

I asked Brit Morgan some questions about it.

As we can see checking out the homepage for Page Weight, you drop in a URL, it analyzes all the images and gives you some information about how much more performant they could have been. What's going on behind the scenes there?

We run each image on the page through imgix, resizing to fit the image container as best we can tell, and also transform file formats, color palettes, and quality breakpoints to determine which combination provides the best size savings. Then we display that version for each image.

I see it suggests fitting the image to the container, but that only makes sense for 1x displays, right? Images need to be larger than their displayed pixel size to look crisp on high-density displays.

Definitely. The Page Weight tool does not currently address high-DPR display differences, but our service does. We offer automated high-DPR support via Client Hints, and manually via our dpr parameter, which allows developers to set the desired value directly (useful on its own or as a fallback for Client Hint support in browsers that don't yet support that functionality). Our imgix.js front-end library also generates a comprehensive srcset (based on the defined sizes) to address whatever size/DPR the device requires.

I think most developers here are smart enough to realize this is really smart marketing for imgix. But also smart enough to realize the images are a huge deal in web performance and want to do better. What can imgix do that a developer on their own can't do? Or that is fairly impractical for a developer to do on their own?

First, it is important to note that resizing is not the only thing that imgix does, although it is a very common use case. We provide over 100 different processing parameters that enable developers to do everything from context-aware cropping to color space handling to image compositing. So adopting imgix gives a developer access to a lot of image handling flexibility without a lot of hassle, even if they’re primarily using it to support responsive design.

That said, it is not impossible to get a very simple resizing solution running on your own, and many developers start out there. Usually, this takes the form of some kind of batch script based on ImageMagick or Pillow or some other image manipulation library that creates derivative images for the different breakpoints.

For a while, that's often sufficient. But once your image library gets beyond a few hundred images, batch-based systems begin to break down in various ways. Visibility, error handling, image catalog cleaning, and adding support for new formats and devices are all things that get much harder at scale. Very large sites and sites where content changes constantly will often end up spending significant dev time on these kinds of maintenance tasks.

So really, "could you build this?" is a less useful question than "should you build this?" In other words, is image processing central enough to the value proposition of what you're building that you're willing to spend time and effort maintaining your own system to handle it? Usually, the answer is no. Most developers would rather focus on what's important and leave images to something like imgix — a robust, scalable system that just works.

Does the tool look at responsive images syntax in HTML? As in, which image was actually downloaded according to the srcset/sizes or picture element rules?

Not yet. That's a feature we're hoping to implement in the next version of the tool.

Can you share implementations of imgix that are particularly impressive or creative?

An interesting use we see more and more is image processing for social media. These days, many sites see the majority of their traffic coming in through social, which makes it more important than ever to make content look good in the feed. Setting OpenGraph tags is a start, but every social network has a different container size. This creates a similar problem to the one posed by mobile fragmentation, and we can help by dynamically generating social images for each network. This provides a polished presentation without adding a ton of overhead for the person maintaining the site.

Other customers are pushing even further by combining several images to create a custom presentation for social. HomeChef, a meal delivery service, does this to dynamically create polished, branded images for Pinterest from their ingredient photos.

We actually created an open source tool called Motif (GitHub Repo) to make it easier for developers to get started with dynamically generating social images through imgix.

(An Interview About) imgix Page Weight is a post from CSS-Tricks

Using ES2017 Async Functions

Mon, 08/14/2017 - 12:23

ES2017 was finalized in June, and with it came wide support for my new favorite JavaScript feature: async functions! If you've ever struggled with reasoning about asynchronous JavaScript, this is for you. If you haven't, then, well, you're probably a super-genius.

Async functions more or less let you write sequenced JavaScript code, without wrapping all your logic in callbacks, generators, or promises. Consider this:

function logger() {
  let data = fetch('http://sampleapi.com/posts')
  console.log(data)
}

logger()

This code doesn't do what you expect. If you've built anything in JS, you probably know why.

But this code does do what you'd expect.

async function logger() {
  let data = await fetch('http://sampleapi.com/posts')
  console.log(data)
}

logger()

That intuitive (and pretty) code works, and it's only two additional words!

Async JavaScript before ES6

Before we dive into async and await, it's important that you understand promises. And to appreciate promises, we need to go back one more step to just plain ol' callbacks.

Promises were introduced in ES6, and made great improvements to writing asynchronous code in JavaScript. No more "callback hell", as it is sometimes affectionately referred to.

A callback is a function that can be passed into a function and called within that function as a response to any event. It's fundamental to JS.

readFile('file.txt', (data) => {
  // This is inside the callback function
  console.log(data)
})

That function is simply logging the data from a file, which isn't possible until the file is finished being read. It seems simple, but what if you wanted to read and log five different files in sequence?

Before promises, in order to execute sequential tasks, you would need to nest callbacks, like so:

// This is officially callback hell
function combineFiles(file1, file2, file3, printFileCallBack) {
  let newFileText = ''
  readFile(file1, (text) => {
    newFileText += text
    readFile(file2, (text) => {
      newFileText += text
      readFile(file3, (text) => {
        newFileText += text
        printFileCallBack(newFileText)
      })
    })
  })
}

It's hard to reason about and difficult to follow. This doesn't even include error handling for the entirely possible scenario that one of the files doesn't exist.

I Promise it gets better (get it?!)

This is where a Promise can help. A Promise is a way to reason about data that doesn't exist yet, but you know it will. Kyle Simpson, author of the You Don't Know JS series, is well known for giving async JavaScript talks. His explanation of promises from this talk is spot on: it's like ordering food at a fast-food restaurant.

  1. Order your food.
  2. Pay for your food and receive a ticket with an order number.
  3. Wait for your food.
  4. When your food is ready, they call your ticket number.
  5. Receive the food.

As he points out, you may not be able to eat your food while you're waiting for it, but you can think about it, and you can prepare for it. You can proceed with your day knowing that food is going to come, even if you don't have it yet, because the food has been "promised" to you. That's all a Promise is. An object that represents data that will eventually exist.
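To make that concrete, here's a minimal sketch of how a Promise gets created in the first place, by wrapping a callback-style readFile like the one above. The in-memory readFile here is a stand-in for illustration, not a real file API:

```javascript
// Stand-in for a callback-style readFile: "reads" from an in-memory
// store and invokes the callback asynchronously, like the examples above
const files = { 'file.txt': 'hello' }
function readFile(path, callback) {
  setTimeout(() => callback(files[path]), 10)
}

// Wrap it once in a Promise, and every caller can chain .then() on it
function readFilePromise(path) {
  return new Promise((resolve) => {
    readFile(path, (data) => resolve(data))
  })
}

readFilePromise('file.txt').then((data) => console.log(data)) // logs "hello"
```

The executor function passed to `new Promise` receives `resolve`, and calling it later is what "fulfills" the promise with the data.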

readFile(file1)
  .then((file1Data) => { /* do something */ })
  .then((previousPromiseData) => { /* do the next thing */ })
  .catch((error) => { /* handle errors */ })

That's the promise syntax. Its main benefit is that it offers an intuitive way to chain together sequential events. This basic example is alright, but you can see that we're still using callbacks. Promises are just thin wrappers around callbacks that make them a bit more intuitive to work with.
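For comparison, the combineFiles example from the callback-hell section flattens out nicely once readFile returns a promise. This sketch assumes a hypothetical promise-returning readFile, stubbed here with canned file contents:

```javascript
// Stub: a promise-returning readFile with canned contents, for illustration
const fakeFiles = { 'a.txt': 'A', 'b.txt': 'B', 'c.txt': 'C' }
function readFile(name) {
  return Promise.resolve(fakeFiles[name])
}

// The same sequential logic as the nested-callback version, but flat:
// each .then() returns the next readFile promise, so the chain continues
function combineFiles(file1, file2, file3) {
  let newFileText = ''
  return readFile(file1)
    .then((text) => { newFileText += text; return readFile(file2) })
    .then((text) => { newFileText += text; return readFile(file3) })
    .then((text) => newFileText + text)
}

combineFiles('a.txt', 'b.txt', 'c.txt').then((result) => console.log(result)) // logs "ABC"
```

The nesting is gone, and a single .catch() at the end of the chain could handle a failure at any step.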

The (new) Best Way: Async / Await

A couple of years ago, async functions made their way into the JavaScript ecosystem. As of last month, they're an official feature of the language and widely supported.

The async and await keywords are a thin wrapper built on promises and generators. Essentially, it allows us to "pause" our function anywhere we want, using the await keyword.

async function logger() {
  // pause until fetch returns
  let data = await fetch('http://sampleapi.com/posts')
  console.log(data)
}

This code runs and does what you'd want. It logs the data from the API call. If your brain didn't just explode, I don't know how to please you.

The benefit to this is that it's intuitive. You write code the way your brain thinks about it, telling the script to pause where it needs to.

The other advantages are that you can use try and catch in a way that we couldn't with promises:

async function logger () {
  try {
    let user_id = await fetch('/api/users/username')
    let posts = await fetch(`/api/posts/${user_id}`)
    let object = JSON.parse(posts.toString())
    console.log(object)
  } catch (error) {
    console.error('Error:', error)
  }
}

This is a contrived example, but it proves a point: catch will catch the error that occurs in any step during the process. There are at least 3 places that the try block could fail, making this by far the cleanest way to handle errors in async code.

We can also use async functions with loops and conditionals without much of a headache:

async function count() {
  let counter = 1
  for (let i = 0; i < 100; i++) {
    counter += 1
    console.log(counter)
    await sleep(1000)
  }
}

This is a silly example, but it will run how you'd expect, and it's easy to read. If you run this in the console, you'll see that the code will pause on the sleep call, and the next loop iteration won't start for one second.
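One detail the example glosses over: sleep isn't built into JavaScript. A common one-liner wraps setTimeout in a promise so it can be awaited. Here's a sketch, with the loop trimmed to three iterations and a shorter delay for brevity:

```javascript
// A promise-based sleep: resolves after the given number of milliseconds
function sleep(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms))
}

// Same idea as the loop above, trimmed to three iterations
async function count() {
  let counter = 1
  for (let i = 0; i < 3; i++) {
    counter += 1
    console.log(counter)
    await sleep(100) // the function pauses here on each pass
  }
  return counter
}

count() // logs 2, 3, 4 — one number per ~100ms
```

Because sleep returns a promise, await can pause on it exactly as it pauses on a fetch call.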

The Nitty Gritty

Now that you're convinced of the beauty of async and await, let's dive into the details:

  • async and await are built on promises. A function that uses async will always itself return a promise. This is important to keep in mind, and probably the biggest "gotcha" you'll run into.
  • When we await, it pauses the function, not the entire code.
  • async and await are non-blocking.
  • You can still use Promise helpers such as Promise.all(). Here's our earlier example:

    async function logPosts () {
      try {
        let user_id = await fetch('/api/users/username')
        let post_ids = await fetch(`/api/posts/${user_id}`)
        let promises = post_ids.map(post_id => {
          return fetch(`/api/posts/${post_id}`)
        })
        let posts = await Promise.all(promises)
        console.log(posts)
      } catch (error) {
        console.error('Error:', error)
      }
    }
  • await can only be used in functions that have been declared async.
  • Therefore, you can't use await in the global scope.

    // throws an error
    function logger (callBack) {
      console.log(await callBack)
    }

    // works!
    async function logger (callBack) {
      console.log(await callBack)
    }
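The first bullet, that an async function always itself returns a promise, is the easiest gotcha to trip over, so here's a quick demonstration:

```javascript
// An async function's return value is always wrapped in a promise,
// even when the body returns a plain value
async function getNumber() {
  return 42 // looks like a plain number...
}

const result = getNumber()
console.log(result instanceof Promise) // true — not 42!

// To get the actual value, await it or use .then()
getNumber().then((n) => console.log(n)) // logs 42
```

Forgetting this wrapping is what leads to logging `Promise { 42 }` instead of the value you expected.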
Available now!

The async and await keywords are available in almost every browser as of June 2017. Even so, to ensure your code works everywhere, use Babel to preprocess your JavaScript into an older syntax that older browsers support.

If you're interested in more of what ES2017 has to offer, you can see a full list of ES2017 features here.


Long Distance

Mon, 08/14/2017 - 12:23

A podcast (turns out to be a 2-parter) from Reply All in which Alex Goldman gets a scam phone call about his iCloud account being compromised. He goes pretty far into investigating it, speaking regularly with the people who run these scams.

Especially resonant for me, as someone who also spoke directly with a hacker whose goal was doing me harm. I've long struggled with thinking rationally about stuff like this.

Direct Link to Article


Crafting Webfont Fallbacks

Sun, 08/13/2017 - 23:25

There is a great bit in here where Glen uses Font Style Matcher to create some CSS for a fallback font that has font-size, line-height, font-weight, letter-spacing, and word-spacing adjusted so perfectly that when the web font does load, the page hardly shifts at all. Like barely noticeable FOUT. Maybe we'll call it FOCST (Flash of Carefully Styled Text).

Direct Link to Article


Pages