CSS-Tricks

Tips, Tricks, and Techniques on using Cascading Style Sheets.

140 Free Stock Videos With Videoblocks

Thu, 12/21/2017 - 15:58

(This is a sponsored post.)

Videoblocks is exploding with over 115,000 stock videos, After Effects templates, motion backgrounds and more! With its user-friendly site, massive library to choose from, and fresh new content, there’s no stopping what you can do. All the content is 100% free from any royalties. Anything you download is yours to keep and use forever! Right now, you can get 7 days of free downloads. Get up to 20 videos every day for 7 days. That's 140 downloads free over the course of the 7 days. Click on over and see where your imagination takes you!

Start Downloading Now

Direct Link to Article


Turn that frown upside down

Wed, 12/20/2017 - 19:55

I got an email that went like this (lightly edited for readability):

CSS makes me sad.

I've been programming web apps for more than a decade now. I can architect the thing, load every required data, make all the hops and jumps until I have a perfectly crafted piece of markup with relevant info.

And then I need to put a box to the left of another box. Or add a scrollbar because a list is too big. Or, god forbid, center some text.

I waste hours and feel worthless and sad. This only happens with CSS.

I think this is a matter of practice. I bet you practice all of the other technologies involved in building the sites you work on more than you practice CSS. If it's any consolation, there are loads of developers out there who feel exactly the opposite. Designing, styling, and doing web layout are easy to them compared to architecting data.

I have my doubts that CSS is inherently bad and poorly designed such that incredibly intelligent people can't handle it. If there were some way to measure it, I might put my money on CSS being one of the easier languages to get good at, given equal amounts of practice time.

In fact, Eric Meyer recently published CSS: The Definitive Guide, 4th Edition, which is more than twice as thick as the original version, yet says:

CSS has a great deal more capabilities than ever before, it’s true. In the sense of “how much there potentially is to know”, yes, CSS is more of a challenge.

But the core principles and mechanisms are no more complicated than they were a decade or even two decades ago. If anything, they’re easier to grasp now, because we don’t have to clutter our minds with float behaviors or inline layout just to try to lay out a page.


One way to digest that might be: if you feel snakebitten by past CSS, it's time to try it again because it's gotten more capable and, dare I say, easier.

We can also take your specifics one-by-one:

And then I need to put a box to the left of another box.

Try flexbox!

See the Pen GyZMrj by Chris Coyier (@chriscoyier) on CodePen.
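If all you need is one box to the left of another, a couple of flexbox declarations get you there. Here's a minimal sketch (the class names are made up for illustration):

/* The parent lays its children out in a row, so the first box sits on the left */
.boxes {
  display: flex;
}

/* Optional: push the second box all the way to the right */
.boxes__right {
  margin-left: auto;
}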

Or add a scrollbar because a list is too big.

Or, god forbid, center some text.

The overflow property is great for handling scrollbar stuff. You can even style them. And we have a whole guide on centering! Here's a two-fer:

See the Pen Centered Scrolling List by Chris Coyier (@chriscoyier) on CodePen.
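A rough sketch of that two-fer, again with made-up class names:

.centered-scrolling-list {
  /* Cap the height and let the list scroll instead of growing */
  max-height: 200px;
  overflow-y: auto;
  /* Center the text inside the box, and the box itself */
  text-align: center;
  width: 300px;
  margin: 0 auto;
}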

Best of luck!


Breaking Down the Performance API

Wed, 12/20/2017 - 14:32

JavaScript's Performance API is a prudent addition to the platform: it hands over tools to accurately measure the performance of web pages, something we have long done with techniques that were never really easy or precise enough.

That said, it isn’t as easy to get started with the API as it is to actually use it. Although I’ve seen extensions of it covered here and there in other posts, the big picture that ties everything together is hard to find.

One look at any document explaining the global performance interface (the access point for the Performance API) and you'll be bombarded with a slew of other specifications, including the High Resolution Time API, the Performance Timeline API and the Navigation Timing API, among what feels like many, many others. It's enough to make the overarching concept more than a little confusing as to what exactly the API is measuring but, more importantly, easy to overlook the specific goodies that we get with it.

Here's an illustration of how all these pieces fit together. This can be super confusing, so having a visual can help clarify what we're talking about.

The Performance API includes the Performance Timeline API and, together, they constitute a wide range of methods that fetch useful metrics on Web page performance.

Let's dig in, shall we?

High Resolution Time API

The performance interface is a part of the High Resolution Time API.

"What is High Resolution Time?" you might ask. That's a key concept we can't overlook.

A time based on the Date object is accurate to the millisecond. A high resolution time, on the other hand, is precise up to fractions of milliseconds. That's pretty darn precise, making it more ideal for yielding accurate measurements of time.

It's worth pointing out that a high resolution time measured by the User Agent (UA) doesn't change with any changes in system time because it is taken from a global, monotonically increasing clock created by the UA. The time always increases and can never be forced backwards. That becomes a useful constraint for time measurement.

Every time measurement in the Performance API is a high resolution time. Not only does that make it a super precise way to measure performance, but it's also what makes the API a part of the High Resolution Time API and why we see the two mentioned together so often.
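You can see the difference right in the console (the values shown are illustrative):

// Date is limited to whole milliseconds since the Unix epoch
Date.now()        // e.g. 1513791512345

// performance.now() returns fractional milliseconds since page load
performance.now() // e.g. 20303.433999997544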

Performance Timeline API

The Performance Timeline API is an extension of the Performance API. That means that where the Performance API is part of the High Resolution Time API, the Performance Timeline API is part of the Performance API.

Or, to put it more succinctly:

High Resolution Time API
└── Performance API
    └── Performance Timeline API

The Performance Timeline API gives us access to almost all of the measurements and values we can possibly get from the whole of the Performance API itself. That's a lot of information at our fingertips with a single API and why the diagram at the start of this article shows them nearly on the same plane as one another.

There are many extensions of the Performance API. Each one returns performance-related entries and all of them can be accessed and even filtered through Performance Timeline, making this a must-learn API for anyone who wants to get started with performance measurements. They are so closely related and complementary that it makes sense to be familiar with both.

The following are three methods of the Performance Timeline API that are included in the performance interface:

  • getEntries()
  • getEntriesByName()
  • getEntriesByType()

Each method returns a list of (optionally filtered) performance entries gathered from all of the other extensions of the Performance API and we'll get more acquainted with them as we go.
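For instance (the URL is just a placeholder):

// Every entry recorded so far
performance.getEntries()

// Only entries of a particular type
performance.getEntriesByType('resource')

// Only entries with a particular name and, optionally, type
performance.getEntriesByName('https://example.com/style.css', 'resource')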

Another key interface included in the API is PerformanceObserver. It watches a given list of performance entries for new entries and notifies you when they arrive. Pretty handy for real-time monitoring!
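A minimal sketch of an observer that logs resource and paint entries as they are recorded:

const observer = new PerformanceObserver((list) => {
  list.getEntries().forEach((entry) => {
    console.log(`${entry.entryType}: ${entry.name} at ${entry.startTime}`)
  })
})

// Start watching for new entries of these types
observer.observe({ entryTypes: ['resource', 'paint'] })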

The Performance Entries

The things we measure with the Performance API are referred to as "entries" and they all offer a lot of insight into Web performance.

Curious what they are? MDN has a full list that will likely get updated as new items are released, but this is what we currently have:

  • frame (Frame Timing API): Measures frames, which represent a loop of the amount of work a browser needs to do to process things like DOM events, resizing, scrolling and CSS animations.
  • mark (User Timing API): Creates a timestamp in the performance timeline that provides values for a name, start time and duration.
  • measure (User Timing API): Similar to mark in that they are points on the timeline, but they are named for you and placed between marks. Basically, they're a midpoint between marks with no custom name value.
  • navigation (Navigation Timing API): Provides context for the load operation, such as the types of events that occur.
  • paint (Paint Timing API): Reports moments when pixels are rendered on the screen, such as the first paint, first paint with content, the start time and total duration.
  • resource (Resource Timing API): Measures the latency of dependencies for rendering the screen, like images, scripts and stylesheets. This is where caching makes a difference!

Let's look at a few examples that illustrate how each API looks in use. To learn more in depth about them, you can check out the specifications linked up in the table above. The Frame Timing API is still in the works.

Paint Timing API, conveniently, has already been covered thoroughly on CSS-Tricks, but here's an example of pulling the timestamp for when painting begins:

// Time when the page began to render
console.log(performance.getEntriesByType('paint')[0].startTime)

The User Timing API can measure the performance of developer scripts. For example, say you have code that validates an uploaded file. We can measure how long that takes to execute:

// Time to console-print "hello"
// We could also make use of "performance.measure()" to measure the time
// instead of calculating the difference between the marks in the last line.
performance.mark('')
console.log('hello')
performance.mark('')
var marks = performance.getEntriesByType('mark')
console.info(`Time taken to say hello: ${marks[1].startTime - marks[0].startTime}`)

The Navigation Timing API shows metrics for loading the current page, including metrics recorded from when the previous page began unloading. We can measure, with a ton of precision, exactly how long the current page takes to load:

// Time to complete DOM content loaded event
var navEntry = performance.getEntriesByType('navigation')[0]
console.log(navEntry.domContentLoadedEventEnd - navEntry.domContentLoadedEventStart)

The Resource Timing API is similar to Navigation Timing API in that it measures load times, except it measures all the metrics for loading the requested resources of a current page, rather than the current page itself. For instance, we can measure how long it takes an image hosted on another server, such as a CDN, to load on the page:

// Response time of resources
performance.getEntriesByType('resource').forEach((r) => {
  console.log(`response time for ${r.name}: ${r.responseEnd - r.responseStart}`);
});

The Navigation Anomaly

Wanna hear an interesting tidbit about the Navigation Timing API?

It was conceived before the Performance Timeline API. That’s why, although you can access some navigation metrics using the Performance Timeline API (by filtering the navigation entry type), the Navigation Timing API itself has two interfaces that are directly extended from the Performance API:

  • performance.timing
  • performance.navigation

All the metrics provided by performance.navigation can be provided by navigation entries of the Performance Timeline API. As for the metrics you fetch from performance.timing, however, only some are accessible from the Performance Timeline API.

As a result, we use performance.timing to get the navigation metrics for the current page instead of using the Performance Timeline API via performance.getEntriesByType("navigation"):

// Time from the start of navigation to the current page to the end of its load event
addEventListener('load', () => {
  var t = performance.timing
  console.log(t.loadEventEnd - t.navigationStart)
})

Let's Wrap This Up

I’d say your best bet for getting started with the Performance API is to begin by familiarizing yourself with all the performance entry types and their attributes. This will get you quickly acquainted with the end results of all the APIs—and the power this API provides for measuring performance.

As a second course of action, get to know how the Performance Timeline API probes into all those available metrics. As we covered, the two are closely related and the interplay between the two can open up interesting and helpful methods of measurement.

At that point, you can make a move toward mastering the fine art of putting the other extended APIs to use. That's where everything comes together and you finally get to see the full picture of how all of these APIs, methods and entries are interconnected.


New in Chrome 63

Tue, 12/19/2017 - 21:56

Yeah, we see browser updates all the time these days and you may have already caught this one. Aside from slick new JavaScript features, there is one new CSS update in Chrome 63 that is easy to overlook but worth calling out:

Chrome 63 now supports the CSS overscroll-behavior property, making it easy to override the browser's default overflow scroll behavior.

The property is interesting because it natively supports the pull to refresh UI that we often see in native and web apps, defines scrolling regions that are handy for popovers and slide-out menus, and provides a method to control the rubber-banding effect on some touch devices so that a page does a hard stop at the top and bottom of the viewport.
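As a quick sketch of the scrolling-region use case (the class name is made up), containing scrolling inside a slide-out menu looks something like this:

/* Scrolling that starts inside the menu stays inside the menu
   instead of chaining to the page behind it */
.slide-out-menu {
  overflow-y: auto;
  overscroll-behavior: contain;
}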

For now, overscroll-behavior is not a W3C standard (here's the WICG proposed draft). It's currently only supported by Chrome (63, of course) which also means it's in Opera (version 50). Chrome Platform Status reports that it is currently in development for Firefox and has public support from Edge.

Direct Link to Article


Using SVG to Create a Duotone Effect on Images

Tue, 12/19/2017 - 14:52

Anything is possible with SVG, right?!

After a year of collaborating with some great designers and experimenting to achieve some pretty cool visual effects, it is beginning to feel like it is. A quick search of "SVG" on CodePen will attest to this. From lettering, shapes, sprites, animations, and image manipulation, everything is better with the aid of SVG. So when a new visual trend hit the web last year, it was no surprise that SVG came to the rescue to allow us to implement it.

The spark of a trend

Creatives everywhere welcomed the 2016 new year with the spark of a colorizing technique popularized by Spotify’s 2015 Year in Music website (here is last year’s) which introduced bold, duotone images to their brand identity.

The Spotify 2015 Year in Music site demonstrates the duotone image technique.

This technique is a halftone reproduction of an image by superimposing one color (traditionally black) with another. In other words, the darker tone will be mapped to the shadows of the image, and the lighter tone, mapped to the highlights.

We can achieve the duotone technique in Photoshop by applying a gradient map (Layer > New Adjustment Layer > Gradient Map) of two colors over an image.

Choose the desired color combination for the gradient map A comparison of the original image (left) and when the gradient map is applied (right)

Right click (or alt + click) the adjustment layer and create a clipping mask to apply the gradient map to just the image layer directly below it instead of applying it to all layers.

It used to take finessing the <canvas> element to calculate the color mapping and paint the result to the DOM, or utilizing CSS blend modes to come close to the desired color effect. Well, thanks to the potentially life-saving powers of SVG, we can create these Photoshop-like "adjustment layers" with SVG filters.

Let’s get SaVinG!

Breaking down the SVG

We are already familiar with the vectorful greatness of SVG. In addition to producing sharp, flexible, and small graphics, SVGs also support over 20 filter effects that allow us to blur, morph, and do so much more to our SVG files. For this duotone effect, we will use two filters to construct our gradient map.

feColorMatrix (optional)

The feColorMatrix effect allows us to manipulate the colors of an image based on a matrix of rgba channels. Una Kravets details color manipulation with feColorMatrix in this deep dive and it's a highly recommended read.

Depending on your image, it may be worth balancing the colors in the image by setting it to grayscale with the color matrix. You can adjust the rgba channels as you'd like for the desired grayscale effect.

<feColorMatrix type="matrix" result="grayscale"
  values="1 0 0 0 0
          1 0 0 0 0
          1 0 0 0 0
          0 0 0 1 0">
</feColorMatrix>

feComponentTransfer

Next is to map the two colors over the highlights and shadows of our grayscale image with the feComponentTransfer filter effect. There are specific element attributes to keep in mind for this filter.

  • color-interpolation-filters (required): Specifies the color space for gradient interpolations, color animations, and alpha compositing. Value to use: sRGB
  • result (optional): Assigns a name to this filter effect that can be used/referenced by another filter primitive with the in attribute. Value to use: duotone

While the result attribute is optional, I like to include it to give additional context to each filter (and as a handy note for future reference).

The feComponentTransfer filter handles the color mapping based on transfer functions for each rgba component, specified as child elements of the parent feComponentTransfer: feFuncR, feFuncG, feFuncB and feFuncA. We use these rgba functions to calculate the values of the two colors in the gradient map.

Here's an example:

The Peachy Pink gradient map in the screenshots above uses a magenta color (#bd0b91), with values of R(189) G(11) B(145).

Divide each RGB value by 255 to get the values of the first color in the matrix. The RGB values of the second column result in #fcbb0d (gold). As in our Photoshop gradient map, the first color (left to right) gets mapped to the shadows, and the second to the highlights.

<feComponentTransfer color-interpolation-filters="sRGB" result="duotone">
  <!-- tableValues can't contain expressions, so 189/255, 11/255 and 145/255 are written out as decimals -->
  <feFuncR type="table" tableValues="0.7411764706 0.9882352941"></feFuncR>
  <feFuncG type="table" tableValues="0.0431372549 0.7333333333"></feFuncG>
  <feFuncB type="table" tableValues="0.568627451 0.05098039216"></feFuncB>
  <feFuncA type="table" tableValues="0 1"></feFuncA>
</feComponentTransfer>

Apply the Effect with a CSS Filter

With the SVG filter complete, we can now apply it to an image by using the CSS filter property and setting the url() filter function to the ID of the SVG filter.

It's worth noting that the SVG containing the filter can just be a hidden element sitting right in your HTML. That way it loads and is available for use, but does not render on the screen.

background-image: url('path/to/img');
filter: url(/path/to/svg/duotone-filters.svg#duotone_peachypink);
filter: url(#duotone_peachypink);

Browser Support

You're probably interested in how well supported this technique is, right? Well, SVG filters have good browser support.

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 8, Opera 9, Firefox 3, IE 10, Edge 12, Safari 6
Mobile / Tablet: iOS Safari 6.0-6.1, Opera Mobile 10, Opera Mini all, Android 4.4, Android Chrome 62, Android Firefox 57

That said, CSS filters are not as widely supported. That means some graceful degradation considerations will be needed.

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 18*, Opera 15*, Firefox 35, IE No, Edge 17, Safari 6*
Mobile / Tablet: iOS Safari 6.0-6.1*, Opera Mobile 37*, Opera Mini No, Android 4.4*, Android Chrome 62, Android Firefox 57

For example, Internet Explorer (IE) does not support the CSS filter url() function, nor does it support CSS background-blend-modes, the next best route to achieving the duotone effect. As a result, a fallback for IE can be an absolutely positioned CSS gradient overlay on the image to mimic the filter.

In addition, when I initially implemented this approach on a project, I had issues in Firefox accessing the filter by path: Firefox seemed to work only if the filter was referenced with the full path to the SVG file instead of the filter ID alone. This no longer seems to be the case, but it is worth keeping in mind.

Bringing it All Together

Here's a full example of the filter in use:

<svg xmlns="http://www.w3.org/2000/svg">
  <filter id="duotone_peachypink">
    <feColorMatrix type="matrix" result="grayscale"
      values="1 0 0 0 0
              1 0 0 0 0
              1 0 0 0 0
              0 0 0 1 0">
    </feColorMatrix>
    <feComponentTransfer color-interpolation-filters="sRGB" result="duotone">
      <feFuncR type="table" tableValues="0.7411764706 0.9882352941"></feFuncR>
      <feFuncG type="table" tableValues="0.0431372549 0.7333333333"></feFuncG>
      <feFuncB type="table" tableValues="0.568627451 0.05098039216"></feFuncB>
      <feFuncA type="table" tableValues="0 1"></feFuncA>
    </feComponentTransfer>
  </filter>
</svg>

Here's the impact that has when applied to an image:

A comparison of the original image (left) with the filtered effect (right) using SVG!

See the Pen Duotone Demo by Lentie Ward (@lentilz) on CodePen.

For more examples, you can play around with more duotone filters in this pen.

Resources

The following resources are great points of reference for the techniques used in this post.


Don’t Use My Grid System (or any others)

Mon, 12/18/2017 - 19:51

This presentation by Miriam at DjangoCon US last summer is not only well done, but an insightful look at the current and future direction of CSS layout tools.

Many of us are familiar with Susy, the roll-your-own Grid system Miriam developed. We published a deep-dive on Susy a few years back to illustrate how easy it makes defining custom grid lines without the same pre-defined measures included in other CSS frameworks, like Foundation or Bootstrap. It really was (and is) a nice tool.

To watch Miriam give a talk that discourages using frameworks—even her very own—is a massive endorsement of more recent CSS developments, like Flexbox and Grid. Her talk feels even more relevant today than it was a few months ago in light of Eric Meyer's recent post on the declining complexity of CSS.

Yes, today's CSS toolkit feels more robust and the pace of development seems to have increased in recent years. But with it come new standards that replace the hacks we've grown accustomed to and, as a result, our beloved language becomes less complicated and less reliant on dependencies to make it do what we want.

Direct Link to Article


Comparing Novel vs. Tried and True Image Formats

Mon, 12/18/2017 - 14:51

Popular image file formats such as JPG, PNG, and GIF have been around for a long time. They are relatively efficient and web developers have introduced many optimization solutions to further compress their size. However, the era of JPGs, PNGs, and GIFs may be coming to an end as newer, more efficient image file formats aim to take their place.

We're going to explore these newer file formats in this post along with an analysis of how they stack up against one another and the previous formats. We will also cover optimization techniques to improve the delivery of your images.

Why do we need new image formats at all?

Aside from image quality, the most noticeable difference between older and newer image formats is file size. New formats use algorithms that are more efficient at compressing data, so the file sizes can be much smaller. In the context of web development, smaller files mean faster load times, which translates into lower bounce rates, more traffic, and more conversions. All good things that we often preach.

As with most technological innovations, the rollout of new image formats will be gradual as browsers consider and adopt their standards. In the meantime, we as web developers will have to accommodate users with varying levels of support. Thankfully, Can I Use is already on top of that and reporting on browser support for specific image formats.

The New Stuff

As we wander into a new frontier of image file formats, we'll have lots of format choices. Here are a few candidates that are already popping up and making cases to replace the existing standard bearers.

WebP

WebP was developed by Google as an alternative to JPG and can be up to 80 percent smaller than JPEGs containing the same image.

WebP browser support is improving all the time. Opera and Chrome currently support it. Firefox announced plans to implement it. For now, Internet Explorer and Safari are the holdouts. Large companies with tons of influence like Google and Facebook are currently experimenting with the format and it already makes up about 95 percent of the images on eBay’s homepage. YouTube also uses WebP for large thumbnails.

If you’re using a CMS like WordPress or Joomla, there are extensions to help you easily implement support for WebP, such as Optimus and Cache Enabler for WordPress and Joomla's own supported extension. These will not break your website for browsers that don’t support the format so long as you provide PNG or JPG fallbacks. As a result, browsers that support the newer formats will see a performance boost while others get the standard experience. Considering that browser support for WebP is growing, it's a great opportunity to save on latency.
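Outside of a CMS, the usual markup-level fallback is the <picture> element; here's a minimal sketch with placeholder file names:

<picture>
  <!-- Browsers that understand WebP take this source... -->
  <source srcset="photo.webp" type="image/webp">
  <!-- ...everyone else falls back to the JPG -->
  <img src="photo.jpg" alt="Description of the photo">
</picture>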

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 23, Opera 12, Firefox No, IE No, Edge No, Safari No
Mobile / Tablet: iOS Safari No, Opera Mobile 11.1, Opera Mini all, Android 4.2-4.3, Android Chrome 62, Android Firefox No

HEIF

High efficiency image files (or HEIF) actually bear the extension HEIC (.heic), which stands for high efficiency image container, but the two acronyms are being used interchangeably. Earlier this year, Apple announced that its newest line of products will support HEIF format by default.

On top of smaller file sizes, HEIF offers more versatility than other formats since it can support both still images and image sequences. Therefore, it’s possible to store burst photos, focal stacks, exposure stacks, images captured from video and other image collections in a single file. HEIF also supports transparency, 3D, and 4K.

In addition to images, HEIF files can hold image properties, thumbnails, metadata and auxiliary data such as depth maps and audio. Image derivations can be stored as well thanks to non-destructive editing operations. That means cropping, rotations, and other alterations can be undone at any time. Imagine all of your image variations contained in a single file!

Apple is doing everything it can to make the transition as seamless as possible. For example, when users share HEIF files with apps that do not support the format, Apple will automatically convert the image to a more compatible format such as JPG.

There is no browser support for HEIF at the time of this writing.

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome No, Opera No, Firefox No, IE No, Edge No, Safari No
Mobile / Tablet: iOS Safari No, Opera Mobile No, Opera Mini No, Android No, Android Chrome No, Android Firefox No

That being said, the file format offers impressive file savings for both video and images. This is becoming increasingly important as our devices become stronger and are able to take higher quality images and videos, thus resulting in a greater need for efficient media files.

FLIF

Free Lossless Image Format (or FLIF) uses a compression algorithm that results in files that are 14-74 percent smaller than older formats without sacrificing quality (i.e. lossless). Therefore, FLIF is a great fit for any type of image or animation.

The FLIF homepage claims that FLIF files are 43 percent smaller on average than typical PNG files. The graph below illustrates how FLIF compares to other formats in this regard.

FLIF often winds up being the most efficient format in tests.

FLIF takes advantage of something called meta-adaptive near-zero integer arithmetic coding, or (appropriately) MANIAC. FLIF also supports progressive interlacing so that images appear whole as soon as they begin downloading, which is another feature that has shown to reduce web page bounce rates.

The potential of FLIF is very exciting, but there is no browser support at the moment nor does it look like any browsers are currently considering adding it. While creators of the format are working hard on achieving native support for popular web browsers and image editing tools, developers can access the FLIF source code and snag a polyfill solution to test it out.

The Existing Stuff

As mentioned earlier, we're likely still years away from the new formats completely taking over. In some cases, it might be better to stick with the tried and true. Let's review what formats we're talking about and discuss how they've stuck around for so long.

JPG

As the ruling standard for most digital cameras and photo sharing devices, JPG is the most frequently used image format on the internet. W3Techs reports that nearly three-quarters of all websites use JPG files. Similarly, most popular photo editing software save images as JPG files by default.

JPG is named after Joint Photographic Experts Group, the organization that developed the technology; hence why JPG is alternatively called JPEG. You may see these acronyms used interchangeably.

The format dates all the way back to 1992, and was created to facilitate lossy compression of bitmap images. Lossy compression is an irreversible process that relies on inexact approximations. The idea was to allow developers to adjust compression ratios to achieve their desired balance between file size and image quality.

The JPG format is terrific for captured photos; however, as the name implies, lossy compression comes with a reduction in image quality. Quality degrades further each time an image is edited and re-saved, which is why developers are taught to refrain from resizing images multiple times.

GIF

GIF is short for graphics interchange format. It depends on a compression algorithm called LZW, which doesn't degrade image quality. The GIF format lacks the color support of JPG and PNG, but it has stuck around nonetheless thanks to its ability to render animations by bundling multiple images into a single file. Images stored inside a GIF file can render in succession to create a short movie-like effect. GIFs can be configured to display image sequences a set number of times or loop infinitely.

Image courtesy of Giphy.com

PNG

The good old portable network graphic (PNG) was originally conceptualized as the successor to the GIF format and debuted in 1996. It was designed specifically for representing images on the web. In terms of popularity, PNG is a close runner-up to JPG. W3Techs claims that 72 percent of websites use this format. Unlike JPG, PNG images are capable of lossless compression (meaning no image quality is lost).

Another advantage over JPG is that PNG supports transparency and opacity. Since large photos tend to look superior in the JPG format, the PNG format is typically used for non-complex graphics and illustrations.

Comparing the transparency support of JPG (left) and PNG (right).

Ways to Improve Image Optimization and Delivery

There are a few vital things to consider when optimizing images for the web because any file format—including the new ones—can end up adding yet another layer of complexity. Images typically account for the bulk of the bytes on a web page, so image optimization is considered low-hanging fruit for improving a website's performance. The Google Dev Guide has a comprehensive article on the topic, but here is a condensed list of tips for speeding up your image delivery.

Implement Support for New Image Formats

Since newer formats like WebP aren't yet universally supported, you must configure your applications so that they serve up the appropriate resources to your users.

You must be able to detect which formats the client supports and deliver the best option. In the case of WebP, there are a few ways to do this.

Invest in a CDN

A content delivery network (CDN) accelerates the delivery of images by caching them on their network of edge servers. Therefore, when visitors come to your website, they get routed to the nearest edge server instead of the origin server. This can produce massive time savings especially if your users are far from your origin server.

We have a whole post on the topic to help understand how CDNs work and how to leverage them for your projects.

Use CSS Instead of Images

Because older browsers didn't support image shadows and rounded corners, veteran web developers are used to displaying certain elements like buttons as images. Remember the days when displaying a custom font required making images for headlines? These practices are still out in the wild, but are terribly inefficient approaches. Instead, use CSS whenever you can.

Check Your Image Cache Settings

For image files that don't change very often, you can utilize HTTP caching directives to improve load times for your regular visitors. That way, when someone visits your website for the first time, their browser will cache the image so that it doesn't have to be downloaded again on subsequent visits. This practice can also save you money by reducing bandwidth costs.

Of course, improper caching can cause problems. Adding a fingerprint, such as a timestamp, to your images can help prevent caching conflicts. Fortunately, most web development platforms do this automatically.

Resize Images for Different Devices

Figuring out how to best accommodate mobile devices with inferior screen resolutions is an ongoing process. Some developers don't even bother and simply offer the same image files for all users, but this approach wastes your bandwidth and your mobile visitors' time. Consider using srcset so that the browser determines which image size it should deliver based on the client's viewport dimensions.
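A quick sketch of that, with made-up file names and breakpoints:

<img src="photo-800.jpg"
     srcset="photo-400.jpg 400w,
             photo-800.jpg 800w,
             photo-1600.jpg 1600w"
     sizes="(max-width: 600px) 100vw, 800px"
     alt="Description of the photo">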

Image Compression Tests

It’s always interesting to see the size differences each image format provides. In the case of this article, we’re comparing lossless and lossy image formats together. Of course, that’s not common practice as many times lossy will be smaller in size than lossless as the quality of the image suffers in order to produce a smaller image size.

In any case, choosing between lossless and lossy image formats should be based on how image intensive your site is and how fast it already runs. For example, an e-commerce shop may be comfortable with a slightly degraded image in exchange for faster load times while a photographer website is likely the opposite in order to showcase talent.

To compare the sizes of each of the six image formats mentioned in this article, we began with three JPG images and converted them into each of the other formats. Here are the performance results.

As previously mentioned, the results below vary significantly due to lossless/lossy image formats. For instance, PNG and FLIF images are both lossless, therefore resulting in larger image files.

Sizes listed as Image 1 / Image 2 / Image 3:

  • WebP: 1.8 MB / 293 KB / 1.6 MB
  • HEIF: 1.2 MB / 342 KB / 1.1 MB
  • FLIF: 7.4 MB / 2.5 MB / 6.6 MB
  • JPG: 3.9 MB / 1.3 MB / 3.5 MB
  • GIF: 6.3 MB / 3.9 MB / 6.7 MB
  • PNG: 13.2 MB / 5 MB / 12.5 MB

According to the results above, HEIF images were smaller overall than any other format. However, due to their lack of support, it currently isn’t possible to integrate the HEIF format into web applications. WebP came in at a fairly close second and does offer ways to work around the less-than-ideal amount of browser support. For users who are using Chrome or Opera, WebP images will certainly help accelerate delivery.

As for the lossless image formats, PNG is significantly larger than its lossy JPG counterpart. However, when optimized with FLIF, savings of about 50 percent were realized. This makes FLIF a great alternative for those who require high-quality images at a smaller file size. That said, FLIF, like HEIF, isn't supported by any web browsers yet.

Conclusion

The old image formats will likely still be around for many years to come, but more developers will embrace the newer formats once they realize the size-saving benefits.

Cameras, mobile devices and many gadgets, in general, are becoming more and more sophisticated meaning that the images and videos taken are of higher quality and taking up more space. New formats must be adopted to mitigate this and it looks like we have some extremely promising options to look forward to, even if it will take some time to see them officially adopted.


Is jQuery still relevant?

Sun, 12/17/2017 - 15:42

Remy Sharp:

I've been playing with BigQuery and querying HTTP Archive's dataset ... I've queried the HTTP Archive and included the top 20 [JavaScript libraries] ... jQuery accounts for a massive 83% of libraries found on the web sites.

This corroborates other research, like W3Techs:

jQuery is used by 96.2% of all the websites whose JavaScript library we know. This is 73.1% of all websites.

And BuiltWith, which shows it at 88.5% of the top 1,000,000 sites they look at.

Even without considering what jQuery does, the amount of people that already know it, and the heaps of resources out there around it, yes, jQuery is still relevant. People haven't stopped teaching it either. Literally in schools, but also courses like David DeSandro's Fizzy School. Not to mention we have our own.

While the casual naysayers and average JavaScript trolls are obnoxious for dismissing it out of hand, I can see things from that perspective too. Would I start a greenfield large project with jQuery? No. Is it easy to get into trouble staying with jQuery on a large project too long? Yes. Do I secretly still feel most comfortable knocking out quick code in jQuery? Yes.

Direct Link to Article


When You Just Don’t Trust a Tab

Sat, 12/16/2017 - 20:32

Do we need a word for when a browser tab has sat too long and you just don't trust things are going to work as you expect them to when you come back?

I tweeted that the other day and apparently other people had them feels.

It's that feeling where you just know your session isn't valid anymore and if you actually try to do anything that requires you to be logged in, it ain't gonna work. It's particularly uncomfortable if you were actually trying to do something and now you're unsure if it's done or saved.

As for that name... here are some good ones from the thread:

  • Schrödinger's tab
  • Crusty tab
  • Tab smell
  • Stale tab
  • Fossilized tab
  • Tab napping
  • Dead tab
  • Orphaned tab
  • Tab rot
So how do you fix it?

It's a UX issue, really, and it depends on the situation. Here are some options.

Shut it all down.

Banks do this a lot. When your session expires, which they time-limit pretty aggressively, you don't just sit on the page, they log you out and send you back to a log in screen with a message.

They might warn you:

Then you're gone:

That might seem a bit much for a site with less sensitive security. But it does quite nicely solve the (let's pick one) "Dead Tab" issue. You aren't left wondering anything. It took action, logged you out, and dropped you on a page where there isn't any half-baked state.

Stay where you are, but warn about actions.

Many sites want to keep you logged in. Ideally, as long as it's secure, you'd be logged in forever until you explicitly log out. Logging in is an awkward dance that nobody particularly enjoys and keeps you away from doing what you want to do.

CodePen is in this category, I'd say. We'd rather not log you out aggressively, but you can certainly get logged out, either through long periods of inactivity or by logging yourself out. Say you logged out on a different tab... that'll log you out everywhere, but at the moment we don't do anything for those other tabs left open that look like you are logged in.

That's the "dead tab" issue. But we do warn you if an action happens that you can't actually do.

WordPress has a kind of awkward flow related to this. Tabs can easily become dead, and if they do, you get no warning at all. When you perform an action that you can't do, you'll get this:

That's a kind of middleman page that actually does refresh your session, so if you do "try again", it usually works. It's scary every time though. Even if it doesn't work, the biggest risk in WordPress is losing writing, but even then, autosave usually saves the day.

Here's an example on CodePen where I created a Pen when I was logged in, but logged out elsewhere, then tried to save.

I'd give us a C- here. At least you know what's going on and you don't lose any work, but, from here on out it's awkward. You'll have to log in on another tab, and probably copy and paste code elsewhere to save it, as the "dead tab" can't get un-dead unless you refresh it.

If we were gunning for an A, we'd allow you to log in on that page without refreshing somehow, and make sure any unsaved changes get saved after the successful login. And with an unsuccessful login, still make sure you get a copy of unsaved work somehow. We might call that...

Stay where you are, warn proactively.

Perhaps messaging like: "You've been logged out. You can log back in here."

To know this, the front end of your site needs to know about the log in status either periodically or in real time. For example, a server-ping every X seconds to check that status and if you've become logged out, show the message (without requiring any other action). Or perhaps a more modern websocket connection that could push the logging out messaging as it happens.
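A minimal sketch of the polling approach, assuming a hypothetical /api/session endpoint and a showLoggedOutMessage() helper:

// Ask the server every 30 seconds whether the session is still valid
setInterval(async () => {
  const res = await fetch('/api/session', { credentials: 'same-origin' });
  const { loggedIn } = await res.json();
  if (!loggedIn) {
    showLoggedOutMessage(); // "You've been logged out. You can log back in here."
  }
}, 30000);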

If you can wire that up to all happen on any page of the site, not require changing pages to fix it, and never lose any unsaved work, that's pretty ideal.

The truly dead tab

The worst case scenario is when the tab has died, and there is no path to recovery. It doesn't tell you it's dead, leaving the page could result in unsaved work or actions, and there is no warning or recovery steps.

Have you seen great UX for this?

This is a major issue in that it affects every single site you can log into. It's both surprising that there isn't more talk and best practices surrounding this, and that there aren't some stand-out sites that handle this particularly well to shout out.

Do you know of some particularly good (or bad) examples?


Creating Cue Files from Markdown

Fri, 12/15/2017 - 15:58

Pretty specific, huh? While we're going to do exactly what the title says, this post is more about a learning exercise and an example of getting help on the internet.

My podcast editor, Chris Enns, is always looking for ways to improve his work and make podcasting better. One kinda cool way to do that is to offer "chapters" so that people can jump around in a podcast to specific points.

Through TimeJump, we already offer that on the website itself. Those happen in the format of hash links like this: #t=12:18. Believe it or not, relative links like that, in the show notes, actually work in some podcatchers (podcast listening apps).

Jumping around an audio element with the TimeJump JavaScript library.

But using "chapters" is, I suppose, the more correct way of handling this. With chapters, a podcatcher can offer its own native UI for displaying and allowing the user to jump between chapters.

Even iOS 11 is now supporting them in the podcast app:

This is the Podcast app built into iOS, but all sorts of different podcatchers display chapters in their own way.

How do you add them to a podcast? I'm no expert here, but there is a native Mac app called Podcast Chapters that does this:

This is exactly what Chris Enns uses to add the chapters, which leads us to Chris' workflow. Chris writes show notes for podcasts in Markdown format. The shows he edits (at least some of them) post the show notes on the web, and the CMSs that power those sites use Markdown.

He'll create a Markdown list (TimeJump compatible) of what is happening in the podcast, like this:

* **[1:49](#t=1:49)** Toys from the future.
* **[8:40](#t=8:40)** Talking about flip.

Another piece of the puzzle here is that the Podcast Chapters app does its thing by giving it a `.cue` file. Cue files look like this:

PERFORMER "ShopTalk Show" TITLE "Episode 273" FILE "shoptalk-273.mp3" MP3 TRACK 01 AUDIO PERFORMER "" TITLE "Toys from the future." INDEX 01 01:49:00 TRACK 02 AUDIO PERFORMER "" TITLE "Talking about flip." INDEX 01 08:40:00

That's a very specific format. It's hand-writable, sure, but it essentially has all the same data as that Markdown list, just in a different format.

There is even an online generator to help make them:

All that stuff I just explained I only understand because Chris himself explained it. This is my favorite part. He explained it by asking for help through a YouTube video that makes the problem clear as day.

Chris knew exactly what he needed to make this workflow work, he just couldn't figure out one little part of it, so he asked.

To be honest, I didn't really know how to solve it either. But, maybe just maybe, I knew just a little bit more, enough to get the process started.

  1. I know how to make an interface that would do the job here: side-by-side <textarea>s for easy copy and pasting.
  2. I know JavaScript can get this done, because it can grab string values out of textareas and has plenty of string processing methods.
  3. I know it's likely to be RegEx territory.

I also know this is programming stuff at the edge of my abilities. I bet I could do it, but it might take me all day and really challenge me.

So instead, I again set the problem up for someone else to jump in and help.

I wrote a script ("a script in the screenwriting or theatre sense") to explain what I thought needed to happen. I made a Pen, and in the JavaScript area, wrote out...

/*
  Step 1
  Break markdown in first textarea into array of lines
  Loop over each line

    Step 2
    Extract value "1:49" from line

    Step 3
    Convert value to "01:49:00"

    Step 4
    Extract value "Toys from the future." from line

    Step 5
    Place these two values into a template you can see in the second textarea
*/

Then James Padolsey jumped in and helped with the final solution:

See the Pen WIP: Creating Cuefile from Markdown by James Padolsey (@padolsey) on CodePen.

It does exactly what everyone was hoping it would do! Thanks James!

It does essentially what I laid out in my little script.

Splits on new lines and loops over the array:

markdown.split('\n').map((line, i) => {

Extract parts of the string that are best to work with:

const title = line.split('** ')[1];
const time = line.match(/\d+:\d+/)[0];

Then manipulates those bits into shape and ultimately uses template literals to craft a new string to plop back into the textarea.
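Pieced together, the whole conversion looks roughly like this (a sketch in the spirit of the solution, not a copy of James' actual code):

const lines = markdown.split('\n').filter((line) => line.trim());

const tracks = lines.map((line, i) => {
  const title = line.split('** ')[1];                 // "Toys from the future."
  const [min, sec] = line.match(/\d+:\d+/)[0].split(':');
  const index = `${min.padStart(2, '0')}:${sec}:00`;  // "01:49:00"
  return `TRACK ${String(i + 1).padStart(2, '0')} AUDIO
  PERFORMER ""
  TITLE "${title}"
  INDEX 01 ${index}`;
});

const cue = `PERFORMER "ShopTalk Show"
TITLE "Episode 273"
FILE "shoptalk-273.mp3" MP3
${tracks.join('\n')}`;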

I'm sure this isn't the only way, and you might balk at the fragility and perhaps awkward nature of this type of parsing. But it also solves a very real and immediate workflow issue.


From Local Server to Live Site

Thu, 12/14/2017 - 16:28

(This is a sponsored post.)

With the right tools and some simple software, your WordPress development workflow can be downright delightful (instead of difficult)! That's why we built Local by Flywheel, our free local development application.

Now, we've launched Local Connect, a sweet feature embedded in the app that gives you push-pull functionality with Flywheel, our WordPress hosting platform. There’s no need to mess with downloading, uploading, and exporting. Pair up these platforms to push local sites live with a few quick clicks, pull down sites for offline editing, and streamline your tools for a simplified process! Download Local for free here and get started!

Direct Link to Article


Accessibility Testing Tools

Thu, 12/14/2017 - 16:27

There is a sentiment that accessibility isn't a checklist, meaning that if you're really trying to make a site accessible, you don't just get to check some things off a list and call it perfect. The list may be imperfect and, worse, it takes the user out of the equation, or so it is said.

Karl Groves once argued against this:

I’d argue that a well-documented process which includes checklist-based evaluations are better at ensuring that all users’ needs are met, not just some users.

I mention this because you might consider an automated accessibility testing tool another form of a checklist. They have rules built into them, and they test your site against that list of rules.

I'm pretty new to the idea of these things, so no expert here, but there appears to be quite a few options! Let's take a look at some of them.

aXe

The Accessibility Engine for automated testing of HTML-based user interfaces. Drop the aXe on your accessibility defects!

aXe can take a look at an HTML document and find potential accessibility problems and report them to you. For example, there are browser extensions (Firefox / Chrome) that give you the ability to generate a report of accessibility errors on the page you're looking at.

At its heart, it's a script, so it can be used in all kinds of ways. For example, you could load up that script in a Pen and test that Pen for accessibility.
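For instance, with the axe-core script loaded on a page, a run looks something like this:

// Run every rule against the whole document and log any violations
axe.run(document).then((results) => {
  results.violations.forEach((violation) => {
    console.log(`${violation.id}: ${violation.help} (${violation.nodes.length} instances)`);
  });
});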

There is a CLI so you can integrate it into build processes or testing environments or deployment flows or whatnot.

Looks like maybe intern-a11y can help script aXe for extra functionality.

Pa11y

Pa11y is your automated accessibility testing pal. It runs HTML CodeSniffer from the command line for programmatic accessibility reporting.

Pa11y is another tool along these lines. It's a script that can test a URL for accessibility issues. You can hit it with a file path or URL from the command line (pa11y http://example.com) and get a report.

As well as use it from a Node environment and configure it however needed. It's actually intentionally meant to be used only programmatically, as it's the programmatic version of HTML_CodeSniffer, the bookmarklet/visual version.
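From Node, a minimal run might look like this (using pa11y's promise-based API; older versions used a callback-style test runner instead):

const pa11y = require('pa11y');

// Test a URL and log each issue pa11y reports
pa11y('http://example.com').then((results) => {
  results.issues.forEach((issue) => {
    console.log(`${issue.code}: ${issue.message}`);
  });
});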

There is also a native app version called Koa11y if that makes usage easier.

Seren Davies recently wrote about a specific scenario where they picked Pa11y over aXe:

We began by investigating aXe CLI, but soon realised it wouldn’t fit our requirements. It couldn’t check pages that required a visitor to log in, so while we could test our product pages, we couldn’t test any customer account pages. Instead we moved over to Pa11y. Its beforeScript step meant we could log into the site and test pages such as the order history.

Google Accessibility Developer Tools

Google is in on the game with Accessibility Developer Tools.

Its main component is the accessibility audit: a collection of audit rules checking for common accessibility problems, and an API for running these rules in an HTML page.

It's similar to the others in that it's designed to be used different ways, like as Grunt task, from the command line, or the browser.

Addy Osmani has a11y, powered by Chrome Accessibility Tools, which appears to provide a nicer API and nicer reporting.

It seems like most of Google's website auditing weight is thrown behind Lighthouse these days though, which include accessibility tests. For example, the "Buttons Have An Accessible Name" test, but that test is actually aXe under the hood.

It's unclear to me if Lighthouse runs a complete and up-to-date aXe audit or not, and if the Accessibility Developer Tools are sort of deprecated in favor of that, or what.

Automated Accessibility Testing Tool (AATT)

PayPal is in on the game with AATT, a combination and extension of already-mentioned tools:

Browser-based accessibility testing tools and plugins require manually testing each page, one at a time. Tools that can crawl a website can only scan pages that do not require login credentials, and that are not behind a firewall. Instead of developing, testing, and using a separate accessibility test suite, you can now integrate accessibility testing into your existing automation test suite using AATT.

AATT includes HTML CodeSniffer, aXe, and Chrome developer tool with Express and PhantomJS, which runs on Node.

It spins up a server with an API you can use to test pages on other servers.

accessibilityjs

GitHub recently released accessibilityjs, the tool they use themselves for accessibility testing. They use it on the client side, where, when it finds an error, it applies a big red border and a click handler so you can click the element to see what the problem is.

They scope it to these common errors:

  • ImageWithoutAltAttributeError
  • ElementWithoutLabelError
  • LinkWithoutLabelOrRoleError
  • LabelMissingControlError
  • InputMissingLabelError
  • ButtonWithoutLabelError
  • ARIAAttributeMissingError
Honorable Mentions

I'm not intentionally trying to feature or hide any particular accessibility testing tool. All this stuff is new to me. It just seemed like these were a lot of the big players. But web searching around reveals plenty more!

  • Tanaguru: "Automated accessibility (a11y) testing tool, with emphasis on reliability and automation"
  • The A11y Machine "is an automated accessibility testing tool which crawls and tests pages of any web application to produce detailed reports."
  • tota11y: "an accessibility (a11y) visualization toolkit"


ABEM. A more useful adaptation of BEM.

Wed, 12/13/2017 - 17:58

BEM (Block Element Modifier) is a popular CSS class naming convention that makes CSS easier to maintain. This article assumes that you are already familiar with the naming convention. If not you can learn more about it at getbem.com to catch up on the basics.

The standard syntax for BEM is:

block-name__element-name--modifier-name

I'm personally a massive fan of the methodology behind the naming convention. Separating your styles into small components is far easier to maintain than having a sea of high specificity spread all throughout your stylesheet. However, there are a few problems I have with the syntax that can cause issues in production as well as cause confusion for developers. I prefer to use a slightly tweaked version of the syntax instead. I call it ABEM (Atomic Block Element Modifier):

[a/m/o]-blockName__elementName -modifierName

An Atomic Design Prefix

The a/m/o is an Atomic Design prefix. Not to be confused with Atomic CSS which is a completely different thing. Atomic design is a methodology for organizing your components that maximizes the ability to reuse code. It splits your components into three folders: atoms, molecules, and organisms. Atoms are super simple components that generally consist of just a single element (e.g. a button component). Molecules are small groups of elements and/or components (e.g. a single form field showing a label and an input field). Organisms are large complex components made up of many molecule and atom components (e.g. a full registration form).

The difficulty of using atomic design with classic BEM is that there is no indicator saying what type of component a block is. This can make it difficult to know where the code for that component is since you may have to search in 3 separate folders in order to find it. Adding the atomic prefix to the start makes it immediately obvious what folder the component is stored in.

camelCase

It allows for custom grouping

Classic BEM separates each individual word within a section with a single dash. Notice that the atomic prefix in the example above is also separated from the rest of the class name by a dash. Take a look at what happens now when you add an atomic prefix to BEM classic vs camelCase:

/* classic + atomic prefix */
.o-subscribe-form__field-item {}

/* camelCase + atomic prefix */
.o-subscribeForm__fieldItem {}

At a glance, the component name when reading the classic method looks like it's called "o subscribe form". The significance of the "o" is completely lost. When you apply the "o-" to the camelCase version though, it is clear that it was intentionally written to be a separate piece of information to the component name.

Now you could apply the atomic prefix to classic BEM by capitalizing the "o" like this:

/* classic + capitalized atomic prefix */
.O-subscribe-form__field-item {}

That would solve the issue of the "o" getting lost amongst the rest of the class name, but it doesn't solve the core underlying issue in the classic BEM syntax. By separating the words with dashes, the dash character is no longer available for you to use as a grouping mechanism. By using camelCase, it frees you up to use the dash character for additional grouping, even if that grouping is just adding a number to the end of a class name.
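For example (the class names here are hypothetical), the freed-up dash can group numbered variations without any ambiguity:

.o-pricingTable__column-1 {}
.o-pricingTable__column-2 {}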

Your mind will process the groupings faster

camelCase also has the added benefit of making the grouping of the class names easier to mentally process. With camelCase, every gap you see in the class name represents a grouping of some sort. In classic BEM, every gap could be either a grouping or a space between two words in the same group.

Take a look at this silhouette of a classic BEM class (plus atomic prefix) and try to figure out where the prefix, block, element and modifier sections start and end:
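(approximated here in text form)

x-xxxxxxxxx-xxxx__xxxxxxx-xxxx--xxxxxxxx-xxxx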

Ok, now try this one. It is the exact same class as the one above except this time it is using camelCase to separate each word instead of dashes:
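(again approximated in text form)

x-xxxxxxxxxXxxx__xxxxxxxXxxx--xxxxxxxxXxxx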

That was much easier, wasn't it? Those silhouettes are essentially what your mind sees when it is scanning through your code. Having all those extra dashes in the class name makes the groupings far less clear. As you read through your code, your brain tries to process whether the gaps it encounters are new groupings or just new words. This lack of clarity causes cognitive load to weigh on your mind as you work.

[Image: classic BEM + atomic prefix vs. camelCase BEM + atomic prefix]

Use multi class selectors (responsibly)

One of the golden rules in BEM is that every selector is only supposed to contain a single class. The idea is that it keeps CSS maintainable by keeping the specificity of selectors low and manageable. On the one hand, I agree that low specificity is preferable over having specificity run rampant. On the other, I strongly disagree that a strict one class per selector rule is the best thing for projects. Using some multi-class selectors in your styles can actually improve maintainability rather than diminish it.

"But it leads to higher specificity! Don't you know that specificity is inherently evil?!?"

Specificity != bad.

Uncontrolled specificity that has run wild = bad.

Having some higher specificity declarations doesn't instantly mean that your CSS is more difficult to maintain. If used in the right way, giving certain rules higher specificity can actually make CSS easier to maintain. The key to writing maintainable CSS with uneven specificity is to add specificity purposefully and not just because a list item happens to be inside a list element.

Besides, don't we actually want our modifier styles to have greater power over elements than default styles? Bending over backwards to keep modifier styles at the same specificity level as normal styles seems silly to me. When do you actually want your regular default styles to override your specifically designated modifier styles?

Separating the modifier leads to cleaner HTML

This is the biggest change to the syntax that ABEM introduces. Instead of connecting the modifier to the element class, you apply it as a separate class.

One of the things that practically everyone complains about when they first start learning BEM is how ugly it is. It is especially bad when it comes to modifiers. Take a look at this atrocity. It only has three modifiers applied to it and yet it looks like a train wreck:

B__E--M:

<button class="
  block-name__element-name
  block-name__element-name--small
  block-name__element-name--green
  block-name__element-name--active
">
  Submit
</button>

Look at all that repetition! That repetition makes it pretty difficult to read what it's actually trying to do. Now take a look at this ABEM example that has all the same modifiers as the previous example:

A-B__E -M:

<button class="a-blockName__elementName -small -green -active">
  Submit
</button>

Much cleaner isn't it? It is far easier to see what those modifier classes are trying to say without all that repetitive gunk getting in the way.

When inspecting an element with browser DevTools, you still see the full rule in the styling panel so it retains the connection to the original component in that way:

.a-blockName__elementName.-green {
  background: green;
  color: white;
}

It's not much different from the BEM equivalent:

.block-name__element-name--green {
  background: green;
  color: white;
}

Managing state becomes easy

One large advantage that ABEM has over classic BEM is that it becomes immensely easier to manage the state of a component. Let's use a basic accordion as an example. When a section of this accordion is open, let's say that we want to apply these changes to the styling:

  • Change the background colour of the section heading
  • Display the content area
  • Make a down arrow point up

We are going to stick to the classic B__E--M syntax for this example and strictly adhere to the one class per CSS selector rule. This is what we end up with (note that, for the sake of brevity, this accordion is not accessible):

See the Pen Accordion 1 - Pure BEM by Daniel Tonon (@daniel-tonon) on CodePen.

The SCSS looks pretty clean but take a look at all the extra classes that we have to add to the HTML for just a single change in state!

HTML while a segment is closed using BEM:

<div class="revealer accordion__section">
  <div class="revealer__trigger">
    <h2 class="revealer__heading">Three</h2>
    <div class="revealer__icon"></div>
  </div>
  <div class="revealer__content">
    Lorem ipsum dolor sit amet...
  </div>
</div>

HTML while a segment is open using BEM:

<div class="revealer accordion__section">
  <div class="revealer__trigger revealer__trigger--open">
    <h2 class="revealer__heading">One</h2>
    <div class="revealer__icon revealer__icon--open"></div>
  </div>
  <div class="revealer__content revealer__content--open">
    Lorem ipsum dolor sit amet...
  </div>
</div>

Now let's take a look at what happens when we switch over to using this fancy new A-B__E -M method:

See the Pen Accordion 2 - ABEM alternative by Daniel Tonon (@daniel-tonon) on CodePen.

A single class now controls the state-specific styling for the entire component, instead of having to apply a separate class to each element individually.

HTML while a segment is open using ABEM:

<div class="m-revealer o-accordion__section -open">
  <div class="m-revealer__trigger">
    <h2 class="m-revealer__heading">One</h2>
    <div class="m-revealer__icon"></div>
  </div>
  <div class="m-revealer__content">
    Lorem ipsum dolor sit amet...
  </div>
</div>

Also, take a look at how much simpler the JavaScript has become. I wrote it as cleanly as I could and this was the result:

JavaScript when using pure BEM:

class revealer {
  constructor(el){
    Object.assign(this, {
      $wrapper: el,
      targets: ['trigger', 'icon', 'content'],
      isOpen: false,
    });
    this.gather_elements();
    this.$trigger.onclick = () => this.toggle();
  }

  gather_elements(){
    const keys = this.targets.map(selector => `$${selector}`);
    const elements = this.targets.map(selector => {
      return this.$wrapper.querySelector(`.revealer__${selector}`);
    });
    let elObject = {};
    keys.forEach((key, i) => {
      elObject[key] = elements[i];
    });
    Object.assign(this, elObject);
  }

  toggle(){
    if (this.isOpen) {
      this.close();
    } else {
      this.open();
    }
  }

  open(){
    this.targets.forEach(target => {
      this[`$${target}`].classList.add(`revealer__${target}--open`);
    })
    this.isOpen = true;
  }

  close(){
    this.targets.forEach(target => {
      this[`$${target}`].classList.remove(`revealer__${target}--open`);
    })
    this.isOpen = false;
  }
}

document.querySelectorAll('.revealer').forEach(el => {
  new revealer(el);
})

JavaScript when using ABEM:

class revealer {
  constructor(el){
    Object.assign(this, {
      $wrapper: el,
      isOpen: false,
    });
    this.$trigger = this.$wrapper.querySelector('.m-revealer__trigger');
    this.$trigger.onclick = () => this.toggle();
  }

  toggle(){
    if (this.isOpen) {
      this.close();
    } else {
      this.open();
    }
  }

  open(){
    this.$wrapper.classList.add(`-open`);
    this.isOpen = true;
  }

  close(){
    this.$wrapper.classList.remove(`-open`);
    this.isOpen = false;
  }
}

document.querySelectorAll('.m-revealer').forEach(el => {
  new revealer(el);
})

This was just a very simple accordion example. Think about what happens when you extrapolate this out to something like a sticky header that changes when sticky. A sticky header might need to tell 5 different components when the header is sticky. Then in each of those 5 components, 5 elements might need to react to that header being sticky. That's 25 element.classList.add("[componentName]__[elementName]--sticky") rules we would need to write in our JS to strictly adhere to the BEM naming convention. What makes more sense? 25 unique classes that are added to every element that is affected, or just one -sticky class added to the header that all 5 elements in all 5 components are able to access and read easily?
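As a sketch of that second option (the component and element names here are hypothetical), any element in any component that lives under the header can read that one class:

.o-header {
  &.-sticky {
    position: fixed;
    top: 0;
  }
}

.m-siteNav {
  &__logo {
    /* reacts to the single -sticky class toggled on the header */
    .-sticky & {
      height: 40px;
    }
  }
}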

The BEM "solution" is completely impractical. Applying modifier styling to large complex components ends up turning into a bit of a grey area. A grey area that causes confusion for any developers trying to strictly adhere to the BEM naming convention as closely as possible.

ABEM modifier issues

Separating the modifier isn't without its flaws. However, there are some simple ways to work around those flaws.

Issue 1: Nesting

So we have our accordion and it's all working perfectly. Later down the line, the client wants to nest a second accordion inside the first one. So you go ahead and do that... this happens:

See the Pen Accordion 3 - ABEM nesting bug by Daniel Tonon (@daniel-tonon) on CodePen.

Nesting a second accordion inside the first one causes a rather problematic bug. Opening the parent accordion also applies the open state styling to all of the child accordions in that segment.

This is something that you obviously don't want to happen. There is a good way to avoid this though.

To explain it, let's play a little game. Assuming that both of these CSS rules are active on the same element, what color do you think that element's background would be?

.-green > * > * > * > * > * > .element {
  background: green;
}

.element.-blue {
  background: blue;
}

If you said green due to the first rule having a higher specificity than the second rule, you would actually be wrong. Its background would be blue.

Fun fact: * is the lowest specificity selector in CSS. It basically means "anything". It has no specificity at all, meaning it doesn't add any specificity to a selector you use it in. That means that even if you used a rule that consisted of a single class and 5 stars (.element > * > * > * > * > *), it could still easily be overwritten by just a single class on the next line of CSS!

We can take advantage of this little CSS quirk to create a more targeted approach to the accordion SCSS code. This will allow us to safely nest our accordions.

See the Pen Accordion 4 - ABEM nesting bug fix by Daniel Tonon (@daniel-tonon) on CodePen.

By using the .-modifierName > * > & pattern, you can target direct descendants that are multiple levels deep without causing your specificity to get out of control.
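Here is a sketch of how that plays out with the accordion icon (based on the markup from the ABEM example above). The -open class sits on the .m-revealer wrapper, and the icon is two levels below it, so the selector steps down through exactly one intermediate element:

.m-revealer {
  &__icon {
    transform: rotate(0deg);

    /* compiles to `.-open > * > .m-revealer__icon`, so the icon of a
       nested accordion is out of reach of the parent's -open class */
    .-open > * > & {
      transform: rotate(180deg);
    }
  }
}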

I only use this direct targeting technique as it becomes necessary though. By default, when I'm writing ABEM, I'll write it how I did in that original ABEM accordion example. The non-targeted method is generally all that is needed in most cases. The problem with the targeted approach is that adding a single wrapper around something can potentially break the whole system. The non-targeted approach doesn't suffer from this problem. It is much more lenient and prevents the styles from breaking if you ever need to alter the HTML later down the line.

Issue 2: Naming collisions

An issue that you can run into using the non-targeted modifier technique is naming collisions. Let's say that you need to create a set of tabs and each tab has an accordion in it. While writing this code, you have made both the accordion and the tabs respond to the -active class. This leads to a name collision. All accordions in the active tab will have their active styles applied. This is because all of the accordions are children of the tab container elements. It is the tab container elements that have the actual -active class applied to them. (Neither the tabs nor the accordion in the following example are accessible for the sake of brevity.)

See the Pen Accordion in tabs 1 - broken by Daniel Tonon (@daniel-tonon) on CodePen.

Now one way to resolve this conflict would be to simply change the accordion to respond to an -open class instead of an -active class. I would actually recommend that approach. For the sake of an example though, let's say that isn't an option. You could use the direct targeting technique mentioned above, but that makes your styles very brittle. Instead what you can do is add the component name to the front of the modifier like this:

.o-componentName {
  &__elementName {
    .-componentName--modifierName & {
      /* modifier styles go here */
    }
  }
}

The dash at the front of the name still signifies that it is a modifier class. The component name prevents namespace collisions with other components that should not be getting affected. The double dash is mainly just a nod to the classic BEM modifier syntax to double reinforce that it is a modifier class.
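For reference, that nesting compiles down to a plain descendant selector:

.-componentName--modifierName .o-componentName__elementName {
  /* modifier styles go here */
}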

Here is the accordion and tabs example again but this time with the namespace fix applied:

See the Pen Accordion in tabs 2 - fixed by Daniel Tonon (@daniel-tonon) on CodePen.

I recommend not using this technique by default, though, mainly for the sake of keeping the HTML clean and also to prevent confusion when multiple components need to share the same modifier.

The majority of the time, a modifier class is being used to signify a change in state like in the accordion example above. When an element changes state, all child elements, no matter what component they belong to, should be able to read that state change and respond to it easily. When a modifier class is intended to affect multiple components at once, confusion can arise around what component that modifier specifically belongs to. In those cases, name-spacing the modifier does more harm than good.

ABEM modifier technique summary

So, to make the best use of the ABEM modifier, use the .-modifierName & or &.-modifierName syntax by default (which one depends on which element the class is applied to):

.o-componentName {
  &.-modifierName {
    /* componentName modifier styles go here */
  }

  &__elementName {
    .-modifierName & {
      /* elementName modifier styles go here */
    }
  }
}

Use direct targeting if nesting a component inside itself is causing an issue.

.o-componentName {
  &__elementName {
    .-nestedModifierName > * > & {
      /* modifier styles go here */
    }
  }
}

Use the component name in the modifier if you run into shared modifier name collisions. Only do this if you can't think of a different modifier name that still makes sense.

.o-componentName {
  &__elementName {
    .-componentName--sharedModifierName & {
      /* modifier styles go here */
    }
  }
}

Context sensitive styles

Another issue with strictly adhering to the BEM one class per selector methodology is that it doesn't allow you to write context sensitive styles.

Context sensitive styles are basically "if this element is inside this parent, apply these styles to it".

With context sensitive styles, there is a parent component and a child component. The parent component should be the one that applies layout related styles such as margin and position to the child component (.parent .child { margin: 20px }). By default, the child component should not have any margin around the outside of the component. This allows child components to be used in more contexts, since the parent is in charge of its own layout rather than its children.

Just like with real parenting, the parents are the ones who should be in charge. You shouldn't let their naughty, clueless children call the shots when it comes to the parent's layout.
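In code, that division of responsibility might look something like this (the component names are hypothetical):

/* the child component carries no outer margin of its own */
.m-button {
  margin: 0;
}

/* the parent decides where its children sit */
.o-subscribeForm .m-button {
  margin-top: 20px;
}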

To dig further into this concept, let's pretend that we are building a fresh new website and right now we are building the subscribe form component for the site.

See the Pen Context sensitive 1 - IE unfriendly by Daniel Tonon (@daniel-tonon) on CodePen.

This is the first time we have had to put a form on this awesome new site that we are building. We want to be like all the cool kids, so we use CSS grid to do the layout. We're smart though. We know that the button styling is going to be used in a lot more places throughout the site. To prepare for this, we split the subscribe button styles out into their own component like good little developers.

A while later we start cross-browser testing. We open up IE11 only to see this ugly thing staring us in the face:

IE11 does kind of support CSS grid, but it doesn't support grid-gap or auto placement. After some cathartic swearing and wishing people would update their browsers, we adjust the styles to look more like this:

See the Pen Context sensitive 2 - what not to do by Daniel Tonon (@daniel-tonon) on CodePen.

Now it looks perfect in IE. All is right with the world. What could possibly go wrong?

A couple of hours later, you are putting this button component into a different component on the site. This other component also uses CSS grid to lay out its children.

You write the following code:

See the Pen Context sensitive 3 - the other component by Daniel Tonon (@daniel-tonon) on CodePen.

You expect to see a layout that looks like this even in IE11:

But instead, because of the grid-column: 3; code you wrote earlier, it ends up looking like this:

Yikes! So what do we do about this grid-column: 3; CSS we wrote earlier? We need to restrict it to the parent component but how should we go about doing that?

Well the classic BEM method of dealing with this is to add a new parent component element class to the button like this:

See the Pen Context sensitive 4 - classic BEM solution by Daniel Tonon (@daniel-tonon) on CodePen.

On the surface this solution looks pretty good:

  • It keeps specificity low
  • The parent component is controlling its own layout
  • The styling isn't likely to bleed into other components we don't want it to bleed into

Everything is awesome and all is right with the world… right?

The downside of this approach is mainly that we had to add an extra class to the button component. Since the subscribe-form__submit class doesn't exist in the base button component, we need to add extra logic to whatever we are using as our templating engine for it to receive the correct styles.

I love using Pug to generate my page templates. I'll show you what I mean using Pug mixins as an example.

First, here is the original IE unfriendly code re-written in mixin format:

See the Pen Context sensitive 5 - IE unfriendly with mixins by Daniel Tonon (@daniel-tonon) on CodePen.

Now let's add that IE11 subscribe-form__submit class to it:

See the Pen Context sensitive 6 - IE safe BEM solution with mixins by Daniel Tonon (@daniel-tonon) on CodePen.

That wasn't so hard, so what am I complaining about? Well now let's say that we sometimes want this module to be placed inside a sidebar. When it is, we want the email input and the button to be stacked on top of one another. Remember that in order to strictly adhere to BEM, we are not allowed to use anything higher in specificity than a single class in our styles.

See the Pen Context sensitive 7 - IE safe BEM with mixins in sidebar by Daniel Tonon (@daniel-tonon) on CodePen.

That Pug code isn't looking so easy now, is it? There are a few things contributing to this mess.

  1. Container queries would make this far less of a problem but they don't exist yet natively in any browser
  2. The problems around the BEM modifier syntax are rearing their ugly heads.

Now let's try doing it again, but this time using context sensitive styles:

See the Pen Context sensitive 8 - IE safe Context Sensitive with mixins in sidebar by Daniel Tonon (@daniel-tonon) on CodePen.

Look at how much simpler the Pug markup has become. There is no "if this then that" logic to worry about in the Pug markup. All of that parental logic is passed off to the CSS, which is much better at understanding what elements are parents of other elements anyway.

You may have noticed that I used a selector that was three classes deep in that last example. It was used to apply 100% width to the button. Yes, a three class selector is OK if you can justify it.

I didn't want 100% width to be applied to the button every time it was:

  • used anywhere at all
  • placed inside the subscribe form
  • placed inside the side-bar

I only wanted 100% width to be applied when it was both inside the subscribe form and inside the sidebar. The best way to handle that was with a three class selector.
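Spelled out as a selector, that logic might look something like this (the class names are assumed for the sake of illustration):

/* 100% width only when the button is inside the subscribe form
   AND the subscribe form is inside the sidebar */
.side-bar .subscribe-form .button {
  width: 100%;
}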

Ok, in reality, I would more likely use an ABEM style -verticalStack modifier class on the subscribe-form element to apply the vertical stack styles or maybe even do it through element queries using EQCSS. This would mean that I could apply the vertical stack styles in more situations than just when it's in the sidebar. For the sake of an example though, I've done it as context sensitive styles.

Now that we understand context sensitive styles, let's go back to that original example I had and use some context sensitive styles to apply that troublesome grid-column: 3 rule:

See the Pen Context sensitive 9 - context sensitive method with mixins by Daniel Tonon (@daniel-tonon) on CodePen.

Context sensitive styles lead to simpler HTML and templating logic whilst still retaining the reusability of child components. BEM's one class per selector philosophy doesn't allow for this to happen though.

Since context sensitive styles are primarily concerned with layout, depending on circumstances, you should generally use them whenever you are dealing with these CSS properties:

  • Anything CSS grid related that is applied to the child element (grid-column, grid-row etc.)
  • Anything flexbox related that is applied to the child element (flex-grow, flex-shrink, align-self etc.)
  • margin values greater than 0
  • position values other than relative (along with the top, left, bottom, and right properties)
  • transform if it is used for positioning like translateY

You may also want to place these properties into context sensitive styles, but they aren't needed in a context sensitive way as often:

  • width
  • height
  • padding
  • border

To be absolutely clear though, context sensitive styles are not nesting for the sake of nesting. You need to think of them as if you were writing an if statement in JavaScript.

So for a CSS rule like this:

.parent .element {
  /* context sensitive styles */
}

You should think of it like you are writing this sort of logic:

if (.element in .parent) {
  .element {
    /* context sensitive styles */
  }
}

Also understand that writing a rule that is three levels deep like this:

.grandparent .parent .element {
  /* context sensitive styles */
}

should be thought of like you are writing logic like this:

if (
  (.element in .parent) &&
  (.element in .grandparent) &&
  (.parent in .grandparent)
) {
  .element {
    /* context sensitive styles */
  }
}

So by all means, write a CSS selector that is three levels deep if you really think you need that level of specificity. Please understand the underlying logic of the CSS that you are writing though. Only use a level of specificity that makes sense for the particular styling that you are trying to achieve.

And again, one more time, just to be super clear, do not nest for the sake of nesting!

Summing Up

The methodology behind the BEM naming convention is something that I wholeheartedly endorse. It allows CSS to be broken down into small, easily manageable components rather than left in an unwieldy mess of high specificity that is difficult to maintain. The official syntax for BEM leaves a lot to be desired though.

The official BEM syntax:

  • Doesn't support Atomic Design
  • Is unable to be extended easily
  • Takes longer for your mind to process the grouping of the class names
  • Is horribly incompetent when it comes to managing state on large components
  • Tries to encourage you to use single class selectors when double class selectors lead to easier maintainability
  • Tries to namespace everything, even when namespacing causes more problems than it solves
  • Makes HTML extremely bloated when done properly

My unofficial ABEM approach:

  • Makes working with Atomic Design easier
  • Frees up the dash character as an extra method that can be used for grouping
  • Allows your mind to process the grouping of the class names faster
  • Is excellent at handling state on any sized component no matter how many sub components it has
  • Encourages controlled specificity rather than just outright low specificity to mitigate team confusion and improve site maintainability
  • Avoids namespacing when it isn't needed
  • Keeps HTML quite clean with minimal extra classes applied to modules while still retaining all of BEM's advantages
Disclaimer

I didn't invent the -modifier (single dash before the modifier name) idea. I discovered it in 2016 from reading an article. I can't remember who originally conceptualized the idea. I'm happy to credit them if anyone knows the article.

ABEM. A more useful adaptation of BEM. is a post from CSS-Tricks

Keeping Parent Visible While Child in :focus

Tue, 12/12/2017 - 15:15

Say we have a <div>.

We only want this div to be visible when it's hovered, so:

div:hover { opacity: 1; }

We need focus styles as well, for accessibility, so:

div:hover, div:focus { opacity: 1; }

But divs can't be focused on their own, so we'll need:

<div tabindex="0"> </div>

There is content in this div. Not just text, but links as well.

<div tabindex="0"> <p>This little piggy went to market.</p> <a href="#market">Go to market</a> </div>

This is where it gets tricky.

As soon as focus moves from the div to the anchor link inside it, the div is no longer in focus, which leads to this weird and potentially confusing situation:

In this example, :hover reveals the div, including the link inside. Focusing the div also works, but as soon as you tab to move focus to the link, everything disappears. The link inside can receive focus, but it's visually hidden because the div parent is visually hidden.

One solution here is to ensure that the div remains visible when anything inside of it is focused. New CSS has our back here:

div:hover, div:focus, div:focus-within { opacity: 1; }

[GIF: the div staying visible as focus moves inside it]

But browser support isn't great for :focus-within. If it were perfect, this is all we would need. In fact, we wouldn't even need :focus, because :focus-within handles that also.

But until then, we might need JavaScript to help. How you actually approach this depends, but the idea would be something like...

  1. When an element comes into focus...
  2. If the parent of that element is also focusable, make sure it is visible
  3. When the link leaves focus...
  4. Whatever you did to make the parent visible is reversed

There is a lot to consider here, like which elements you actually want to watch, how to make them visible, and how far up the tree you want to go.

Something like this is a very basic approach:

var link = document.querySelector(".deal-with-focus-with-javascript");

link.addEventListener("focus", function() {
  link.parentElement.classList.add("focus");
});

link.addEventListener("blur", function() {
  link.parentElement.classList.remove("focus");
});

See the Pen :focus-within helpful a11y thing by Chris Coyier (@chriscoyier) on CodePen.

Keeping Parent Visible While Child in :focus is a post from CSS-Tricks

How Would You Solve This Rendering Puzzle In React?

Mon, 12/11/2017 - 15:07

Welcome, React aficionados and amateurs like myself! I have a puzzle for you today.

Let's say that you wanted to render out a list of items in a two-column structure. Each of these items is a separate component. For example, say we had a list of albums and we wanted to render them as a full-page, two-column list. Each "Album" is a React component.

[Image: scroll rendering problem]

Now assume the CSS framework that you are using requires you to render out a two column layout like this…

<div class="columns"> <div class="column"> Column 1 </div> <div class="column"> Column 2 </div> <div class="columns">

This means that in order to render out the albums correctly, you have to open a columns div tag, render two albums, then close the tag. You do this over and over until all the albums have been rendered out.

I solved it by breaking the set into chunks, rendering every other album conditionally in a separate render function. That render function is only called for every other item.

class App extends Component {
  state = { albums: [] }

  async componentDidMount() {
    let data = Array.from(await GetAlbums());
    this.setState({ albums: data });
  }

  render() {
    return (
      <section className="section">
        {this.state.albums.map((album, index) => {
          // use the modulus operator to determine even items
          return index % 2 ? this.renderAlbums(index) : '';
        })}
      </section>
    )
  }

  renderAlbums(index) {
    // two albums at a time - the current and previous item
    let albums = [this.state.albums[index - 1], this.state.albums[index]];
    return (
      <div className="columns" key={index}>
        {albums.map(album => {
          return (
            <Album album={album} />
          );
        })}
      </div>
    );
  }
}

View Full Project

Another way to do this would be to break the albums array up into a two-dimensional array and iterate over that. In the code below, componentDidMount splits up the array, and the render method holds the vastly simplified rendering logic.

class App extends Component {
  state = { albums: [] }

  async componentDidMount() {
    let data = Array.from(await GetAlbums());
    let albums = [];
    // split the original array into a collection of two item sets
    data.forEach((item, index) => {
      if (index % 2) {
        albums.push([data[index - 1], data[index]]);
      }
    });
    this.setState({ albums: albums });
  }

  render() {
    return (
      <section className="section">
        {this.state.albums.map((album, index) => {
          return (
            <div className="columns" key={index}>
              <Album album={album[0]}></Album>
              <Album album={album[1]}></Album>
            </div>
          )
        })}
      </section>
    )
  }
}

View Full Project

This cleans up the JSX quite a bit, but now I'm redundantly entering the Album component, which just feels wrong.

Sarah Drasner pointed out to me that I hadn't even considered one of the more important scenarios here, and that is the unknown bottom scenario.

Unknown Bottom

Both of my solutions above assume that the results set received from the fetch is final. But what if it isn't?

What if we are streaming data from a server (à la RxJS) and we don't know how many times we will receive a results set, or how many items will be in a given set? That seriously complicates things and utterly destroys the proposed solutions. In fact, we could go ahead and say that neither of these solutions is ideal because they don't scale to this use case.

I feel like the absolute simplest solution here would be to fix this in the CSS. Let the CSS worry about the layout the way God intended. I still think it’s important to look at how to do this with JSX because there are people building apps in the real world who have to deal with shenanigans like this every day. The requirements are not always what we want them to be.
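To sketch what that CSS-first approach might look like (the class name is made up, and it assumes the framework's .columns wrapper can be dropped):

/* two columns handled entirely in CSS, so the JSX can
   render the albums as one flat list */
.albums {
  display: grid;
  grid-template-columns: 1fr 1fr;
  grid-gap: 1rem;
}

Items simply flow into the grid as they arrive, so an unknown number of result sets stops being a layout problem at all.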

How Would You Do It?

My question is just that — how would you do this? Is there a cleaner more efficient way? How can this be done so that it scales with an unknown bottom? Inquiring minds (mine specifically) would love to know.

How Would You Solve This Rendering Puzzle In React? is a post from CSS-Tricks

Evolution of img: Gif without the GIF

Sun, 12/10/2017 - 17:56

Colin Bendell writes about a new and particularly weird addition to Safari Technology Preview in this excellent post about the evolution of animated images on the web. He explains how we can now add an MP4 file directly to the source of an img tag. That would look something like this:

<img src="video.mp4"/>

The idea is that that code would render an image with a looping video inside. As Colin describes, this provides a host of performance benefits:

Animated GIFs are a hack. [...] But they have become an awesome tool for cinemagraphs, memes, and creative expression. All of this awesomeness, however, comes at a cost. Animated GIFs are terrible for web performance. They are HUGE in size, impact cellular data bills, require more CPU and memory, cause repaints, and are battery killers. Typically GIFs are 12x larger files than H.264 videos, and take 2x the energy to load and display in a browser. And we’re spending all of those resources on something that doesn’t even look very good – the GIF 256 color limitation often makes GIF files look terrible...

By enabling video content in img tags, Safari Technology Preview is paving the way for awesome Gif-like experiences, without the terrible performance and quality costs associated with GIF files. This functionality will be fantastic for users, developers, designers, and the web. Besides the enormous performance wins that this change enables, it opens up many new use cases that media and ecommerce businesses have been yearning to implement for years. Here’s hoping the other browsers will soon follow.

This seems like a weird hack but, after mulling it over for a second, I get how simple and elegant a solution this is. It also sort of means that other browsers won’t have to support WebP in the future, too.

Direct Link to ArticlePermalink

Evolution of img: Gif without the GIF is a post from CSS-Tricks

Calendar with CSS Grid

Sat, 12/09/2017 - 15:14

Here’s a nifty post by Jonathan Snook where he walks us through how to make a calendar interface with CSS Grid. There are a lot of tricks in here that are worth digging into a little bit more, particularly where Jonathan uses grid-auto-flow: dense, which lets Grid take the wheel of a design and try to fill up as much of the allotted space as possible.
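The core idea might be sketched like this (a rough approximation, not Jonathan's actual code):

.calendar {
  display: grid;
  grid-template-columns: repeat(7, 1fr); /* one column per weekday */
  grid-auto-flow: dense; /* let Grid backfill any gaps it can */
}

/* a multi-day event spans several columns */
.event-threeDay {
  grid-column: span 3;
}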

As I was digging around, I found a post on Grid’s auto-placement algorithm by Ian Yates which kinda fleshes things out more succinctly. Might come in handy.

Oh, and we have an example of a Grid-based calendar in our ongoing collection of CSS Grid starter templates.

Direct Link to ArticlePermalink

Calendar with CSS Grid is a post from CSS-Tricks

An Open Source Etiquette Guidebook

Fri, 12/08/2017 - 14:52

Open source software is thriving. Large corporations are building on software that rests on open collaboration, enjoying the many benefits of significant community adoption. Free and open source software is amazing for its ability to bring together many people from all over the world and join their efforts and skills around shared interests.

That said, and because we come from so many different backgrounds, it’s worth taking a moment to reflect on how we work together. The manner in which you conduct yourself while working with others can sometimes impact whether your work is merged, whether someone works on your issue, or in some cases, why you might be blocked from participating in the repository in the future. This post was written to guide people as best as possible on how to keep these communications running smoothly. Here’s a bullet point list of etiquette in open source to help you have a more enjoyable time in the community and contribute to making it a better place.

For the Maintainer
  • Use labels like “help wanted” or “beginner friendly” to guide people to issues they can work on if they are new to the project.
  • When running benchmarks, show the authors of the framework/library/etc the code you’re going to run to benchmark on before running it. Allow them to PR (it’s ok to give a deadline). That way when your benchmark is run you know they have your approval and it’s as fair as possible. This also fixes issues like benchmarking dev instead of prod or some user errors.
  • When you ask someone for help or label an issue help wanted and someone PRs, please write a comment explaining why you are closing it if you decide not to merge. It’s disrespectful of their time otherwise, as they were following your call to action. I would even go so far as to say it would be nice to comment on any PR that you close OR merge, to explain why or say thank you, respectively.
  • Don’t close a PR from an active contributor and reimplement the same thing yourself. Just… don’t do this.
  • If a fight breaks out on an issue and it gets personal, shut it down to core maintainers as soon as possible. Lock the issue and be sure to enforce the code of conduct if necessary.
  • Have a code of conduct and make its presence clear. You might consider the contributor covenant code of conduct. GitHub also now offers easy code of conduct integration with some base templates.
For the User
  • Saying thank you for the project before making an inquiry about a new feature or filing a bug is usually appreciated.
  • When opening an issue, create a small, isolated, simple reproduction of the issue using an online code editor (like CodePen or CodeSandbox) if possible and a GitHub repository if not. The process may help you discover the underlying issue (or realize that it's not an issue with the project). It will also make it easier for maintainers to help you resolve the problem.
  • When opening an issue, please suggest a solution to the problem. Take a few minutes to do a little digging. This blog post has a few suggestions for how to dive into the source code a little. If you’re not sure, explain you’re unsure what to do.
  • When opening an issue, if you’re unable to resolve it yourself, please explain that. The expectation is that you resolve the issues you bring up. If someone else does it, that’s a gift they’re giving to you (so you should express the appropriate gratitude in that case).
  • Don’t file issues that say things like “is this even maintained anymore?” People who work on open source aren’t slaves, and you’re typically not paying them. A comment like this is insulting to the time they have put in, it reads as though the project is not valid anymore just because they needed a break, or were working on something else, or their dad died or they had a kid or any other myriad human reasons for not being at the beck and call of code. It’s totally ok to ask if there’s a roadmap for the future, or to decide based on past commits that it’s not maintained enough for your liking. It’s not ok to be passive aggressive to someone who created something for you for free.
  • If someone respectfully declines a PR because, though valid code, it’s not the direction they’d like to take the project, don’t keep commenting on the pull request. At that point, it might be a better idea to fork the project if you feel strongly the need for a feature.
  • When you want to submit a really large pull request to a project you’re not a core contributor on, it’s a good idea to ask via an issue if the direction you’d like to go makes sense. This also means you’re more likely to get the pull request merged because you have given them a heads up and communicated the plan. Better yet, break it into smaller pull requests so that it’s not too much to grok at one time.
  • Avoid entitlement. The maintainers of the project don't owe you anything. When you start using the project, it becomes your responsibility to help maintain it. If you don't like the way the project is being maintained, be respectful when you provide suggestions and offer help to improve the situation. You can always fork the project to work on your own if you feel very strongly it's not the direction you would personally take it.
  • Before doing anything on a project, familiarize yourself with the contributor guidelines often found in a CONTRIBUTING.md file at the root of the repository. If one does not exist, file an issue to ask if you could help create one.
Final Thoughts

The overriding theme of these tips is to be polite, respectful, and kind. The value of open source to our industry is immeasurable. We can make it a better place for everyone by following some simple rules of etiquette. Remember that often maintainers of projects are working on it in their spare time. Also don’t forget that users of projects are sometimes new to the ever-growing software world. We should keep this in mind when communicating and working together. By so doing, we can make the open source community a better place.

An Open Source Etiquette Guidebook is a post from CSS-Tricks

The User Experience of Design Systems

Fri, 12/08/2017 - 00:37

Rune Madsen jotted down his notes from a talk he gave at UX Camp Copenhagen back in May all about design systems and also, well, the potential problems that can arise when building a single unifying system:

When you start a redesign process for a company, it’s very easy to briefly look at all their products (apps, websites, newsletters, etc) and first of all make fun of how bad it all looks, and then design this one single design system for everything. However, once you start diving into why those decisions were made, they often reveal local knowledge that your design system doesn’t solve. I see this so often where a new design system completely ignores for example the difference between platforms because they standardized their components to make mobile and web look the same. Mobile design is just a different thing: Buttons need to be larger, elements should float to the bottom of the screen so they are easier to reach, etc.

This is born from one of Rune's primary critiques of design systems: that they often benefit the designer more than the user. Even if a company's products aren't the prettiest of all things, they were created in a way that solved for a need at the time, and perhaps we can learn from that rather than assume that standardization is the only way to solve user needs. There's a difference between standardization and consistency, and erring too heavily on the side of standards could have a watering-down effect on UX that tosses the baby out with the bathwater.

A very good read (and presentation) indeed!

Direct Link to ArticlePermalink

The User Experience of Design Systems is a post from CSS-Tricks
