CSS-Tricks

Tips, Tricks, and Techniques on using Cascading Style Sheets.

eBay’s Font Loading Strategy

Tue, 10/03/2017 - 13:57

Senthil Padmanabhan documents how:

  1. Both FOUT and FOIT are undesirable.
  2. The best solution to that is font-display.
  3. Since font-display isn't well supported, the path to get there is very complicated.
  4. They open sourced it.

They went with replicating font-display: optional, my favorite as well.


A Five Minutes Guide to Better Typography

Mon, 10/02/2017 - 21:09

Pierrick Calvez with, just maybe, a bunch of typographic advice that you've heard before. But this is presented very lovingly and humorously.

Repeating the basics with typography feels important in the same way repeating the basics of performance does. Gzip your stuff. Make your line length readable. Set far expires headers. Make sure you have hierarchy. Optimize your images. Align left.

Let's repeat this stuff until people actually do it.


Help Your Users `Save-Data`

Mon, 10/02/2017 - 13:59

The breadth and depth of knowledge to absorb in the web performance space is ridiculous. At a minimum, I'm discovering something new nearly every week. Case in point: The Save-Data header, which I discovered via a Google Developers article by Ilya Grigorik.

If you're looking for the tl;dr version of how Save-Data works, let me oblige you: If you use Chrome's Data Saver extension on your desktop device or opt into data savings on the Android version of Chrome, every request that Chrome sends to a server will contain a Save-Data header with a value of on. While this doesn't do anything for your site out of the gate, you can consider it an opportunity. Given an opportunity to operate on a header like this, what would you do?
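For what it's worth, the preference is also exposed to JavaScript in Chromium-based browsers via navigator.connection.saveData. A minimal client-side sketch, feature-detecting since the API isn't available everywhere:

// navigator.connection is Chromium-only territory, so feature-detect first.
if (navigator.connection && navigator.connection.saveData) {
  // The user has opted into data savings; prefer lighter assets from here on.
  document.documentElement.classList.add('save-data');
}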

Let me give you a few ideas!

Change your image delivery strategy

I don't know if you've noticed, but images are often the largest chunk of the total payload of any given page. So perhaps the most impactful step you can take with Save-Data is to change how images are delivered. What I settled on for my blog was to rewrite requests for high DPI images to low DPI images. When I serve image sets like this on my site, I do so using the <picture> element to serve WebP images with a JPEG or PNG fallback like so:

<picture>
  <source srcset="/img/george-and-susan-1x.webp 1x, /img/george-and-susan-2x.webp 2x">
  <source srcset="/img/george-and-susan-1x.jpg 1x, /img/george-and-susan-2x.jpg 2x">
  <img src="/img/george-and-susan-1x.jpg" alt="LET'S NOT GET CRAZY HERE" width="320" height="240">
</picture>

This solution is backed by tech that's baked into modern browsers. The <picture> element delivers the optimal image format according to a browser's capabilities, while srcset will help the browser decide what looks best for any given device's screen. Unfortunately, both <picture> and srcset lack control over which image source to serve for users who want to save data. Nor should they! That's not the job of srcset or <picture>.

This is where Save-Data comes in handy. When users visit my site and send a Save-Data request header, I use mod_rewrite in Apache to serve low DPI images in lieu of high DPI ones:

RewriteCond %{HTTP:Save-Data} =on [NC]
RewriteRule ^(.*)-2x.(png|jpe?g|webp)$ $1-1x.$2 [R=302,L]

If you're unfamiliar with mod_rewrite, the first line is a condition that checks if the Save-Data header is present and contains a value of on. The [NC] flag merely tells mod_rewrite to perform a case-insensitive match. If the condition is met, the RewriteRule looks for any requests for a PNG, JPEG or WebP asset ending in -2x (the high DPI version), and redirects such requests to an asset ending in -1x (the low DPI version). The R=302 flag signals to the browser that this is a temporary (302) redirect. Since the user can toggle Data Saver on or off at will, we don't want to send a permanent (301) redirect, as the browser will cache it. Caching the redirect ensures that it will be in effect even if Data Saver is turned off later on.

Image delivery with Save-Data turned on

Aren't redirects bad for performance? Most of the time, yes. Redirects add to latency because it takes longer for a request to be resolved. In this instance, however, it's a fitting solution. We don't want to rewrite requests without redirecting them because then low-quality images get cached under the same URLs as high-quality ones. What if the user connects to a wifi network, turns off Data Saver, and returns to the site at a later time? The low-quality image will be served from the cache, even when Data Saver is turned off. That's not at all what we want. If the user isn't using Data Saver, we should assume the default image delivery strategy.

Of course, there are other ways you could change your image delivery in the presence of Save-Data. For example, Cory Dowdy wrote a post detailing how he uses Save-Data to serve lower quality images. Save-Data gives you lots of room to devise an image delivery strategy that makes the best sense for your site or application.

Opt out of server pushes

Server push is good stuff if you're on HTTP/2 and can use it. It allows you to send assets to the client before they ever know they need them. The problem with it, though, is it can be weirdly unpredictable in select scenarios. On my site, I use it to push CSS only, and this generally works well. Although, I do tailor my push strategy in Apache to avoid browsers that have issues with it (i.e., Safari), like so:

<If "%{HTTP_USER_AGENT} =~ /^(?=.*safari)(?!.*chrome).*/i">
  Header add Link "</css/global.5aa545cb.css>; rel=preload; as=style; nopush"
</If>
<Else>
  Header add Link "</css/global.5aa545cb.css>; rel=preload; as=style"
</Else>

In this instance, I'm saying "Hey, I want you to preemptively push my site's CSS to people, but only if they're not using Safari." Even if Safari users come by, they'll still get a preload resource hint (albeit with a nopush attribute to discourage my server from pushing anything).

Even so, it would behoove us to be extra cautious when it comes to pushing assets for users with Data Saver turned on. In the case of my blog, I decided I wouldn't push anything to anyone who had Data Saver enabled. To accomplish this, I made the following change to the initial <If> header:

<If "%{HTTP:Save-Data} == 'on' || %{HTTP_USER_AGENT} =~ /^(?=.*safari)(?!.*chrome).*/i">

This is the same as my initial configuration, but with an additional condition that says "Hey, if Save-Data is present and set to a value of on, don't be pushing that stylesheet. Just preload it instead." It might not be a big change, but it's one of those little things that could help visitors to avoid wasted data if a push was in vain for any reason.

Change how you deliver markup

With Save-Data, you could elect to change what parts of the document you deliver. This presents all sorts of opportunities, but before you can embark on this errand, you'll need to check for the Save-Data header in a back end language. In PHP, such a check might look something like this:

$saveData = (isset($_SERVER["HTTP_SAVE_DATA"]) && stristr($_SERVER["HTTP_SAVE_DATA"], "on") !== false) ? true : false;
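(The examples in this post are PHP. Purely as an illustration, here's a hypothetical equivalent as Express middleware in Node, where app is your Express instance; this isn't from the original post:)

// Hypothetical Express middleware, not from the original post:
// exposes a saveData boolean to later route handlers and templates.
app.use((req, res, next) => {
  const header = req.get('Save-Data') || '';
  res.locals.saveData = header.toLowerCase() === 'on';
  next();
});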

On my blog, I use this $saveData boolean in various places to remove markup for images that aren't crucial to the content of a given page. For example, one of my articles has some animated GIFs and other humorous images that are funny little flourishes for users who don't mind them. But they are heavy, and not really necessary to communicate the central point of the article. I also remove the header illustration and a teeny tiny thumbnail of my book from the nav.

Image markup delivery with Data Saver on and off

From a payload perspective, this can certainly have a profound effect:

The potential effects of Data Saver on page payload

Another opportunity would be to use the aforementioned $saveData boolean to put a save-data class on the <html> element:

<html class="<?php if ($saveData === true) : echo("save-data"); endif; ?>">

With this class, I could then write styles that could change what assets are used in background-image properties, or any CSS property that references external assets. For example:

/* Just a regular ol' background image */
body {
  background-image: url("/images/bg.png");
}

/* A lower quality background image for users with Data Saver turned on */
.save-data body {
  background-image: url("/images/bg-lowsrc.png");
}

Markup isn't the only thing you could modify the delivery of. You could do the same for videos, or even do something as simple as delivering fewer search results per page. Or whatever you can think of!

Conclusion

The Save-Data header gives you a great opportunity to lend a hand to users who are asking you to help them, well, save data. You could use URL rewriting to change media delivery, change how you deliver markup, or really just about anything that you could think of.

How will you help your users Save-Data? Leave a comment and let your ideas be heard!

Jeremy Wagner is the author of Web Performance in Action, available now from Manning Publications. Use promo code sswagner to save 42%.

Check him out on Twitter: @malchata


CSS font-variant tester

Mon, 10/02/2017 - 13:58

From small caps and ligatures to oldstyle and ordinal figures, the font-variant-* CSS properties give us much finer control over the OpenType features in our fonts. Chris Lewis has made a tool to help us see which of those font-variant features are supported in our browser of choice, and I could definitely see this being helpful for debugging in the future.


Template Literals are Strictly Better Strings

Sun, 10/01/2017 - 19:24

Nicolás Bevacqua wrote this a year ago, and I'd say with a year behind us he's even more right. Template literals are always better. You don't have to screw around with concatenation. You don't have to screw around with escaping other quotes. You don't have to screw around with multiline. We should use them all the time, and adjust our linting to help us develop that muscle memory.
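A quick sketch of all three wins at once (my example, not Nicolás's):

const name = "O'Brien";

// Embedded expressions, no quote escaping, and real multiline support:
const greeting = `Hello, ${name}!
"Welcome" back.`;

// The old way, for contrast:
const old = 'Hello, ' + name + '!\n"Welcome" back.';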

Besides the few things you can't use them for (e.g. JSON), there is also the matter of browser support. It's good, but no IE 11 for example. You're very likely preprocessing your JavaScript with Babel anyway, and if you're really smart, making two bundles.


Turning Text into a Tweetstorm

Fri, 09/29/2017 - 20:14

With tongue firmly in cheek, I created this script to take a chunk of text and break it up into a tweetstorm, for "readability". Sort of like the opposite of something like Mercury Reader. If the irony is lost on you, it's a gentle ribbing of people who choose Twitter to publish long-form content, instead of, you know, readable paragraphs.

See the Pen Turning Text into a Tweetstorm by Chris Coyier (@chriscoyier) on CodePen.

It might be fun to look at how it works.

First, we need to bust up the text into an array of sentences.

We aren't going to do any fancy analysis of where the text is on the page, although there is presumably some algorithmic way to do that. Let's just say we have:

<main id="text">
  Many sentences in here. So many sentences. Probably dozens of them.
</main>

Let's get our hands on that text, minus any HTML, like this:

let content = document.querySelector("#text").textContent;

Now we need to break that up into sentences. That could be as simple as splitting on periods, like content.split(". "), but that doesn't use any intelligence at all. For example, a sentence like "Where are you going, Mr. Anderson?" would be broken at the end of "Mr." and not at the "?", which ain't great.

This is "find something on Stack Overflow" territory!

This answer is pretty good. We'll do:

let contentArray = content.replace(/([.?!])\s*(?=[A-Z])/g, "$1|").split("|");

I didn't bother to try and really dig into how it works, but at a glance, it looks like it deals with a few common sentence-ending punctuation types, and also those "Mr. Anderson" situations somehow.

We need some tweet templates.

There are two: the top one that kicks off the thread, and the reply tweets. We should literally make a template, because we'll need to loop over that reply tweet as many times as needed, and that seems like the way to go.

I reached for Handlebars, honestly because it's the first one I thought of. I probably could have gone for the ever-simpler Mustache, but whatever, it's just a demo. I also couldda/shouldda gone with a template via Template Literals.

To make the template, the first thing I did was create a tweet with mock data in just HTML and CSS, like I was just devving out a component from scratch.

<div class="tweet">
  <div class="user">
    <img src="https://cdn.css-tricks.com/fake-user.svg" alt="" class="user-avatar">
    <div class="user-fullname">Jimmy Fiddlecakes</div>
    <div class="user-username">@everythingmatters</div>
  </div>
  <div class="tweet-text">
    Blah blah blah important words. 1/80
  </div>
  <time class="tweet-time">
    5:48 PM - 15 Sep 2017
  </time>
  yadda yadda yadda

I wrote my own HTML and CSS, but used DevTools to poke at the real Twitter design and stole hex codes and font sizes and stuff as much as I could so it looked real.

To make those tweet chunks of HTML into actual templates, I wrapped them up in script tags the way Handlebars does it:

<script id="main-tweet-template" type="text/x-handlebars-template">
  <!-- the tweet markup from above, with {{handlebars}} placeholders
       where the mock data used to be -->
</script>

Now I can:

// Turn the template into a function I can call to compile it:
let mainTweetSource = document.querySelector("#main-tweet-template").innerText;
let mainTweetTemplate = Handlebars.compile(mainTweetSource);

// Compile it whenever:
let mainTweetHtml = mainTweetTemplate(data);

The data there is the useful bit. Kind of the whole point of templates.

What is "data" in a template like this? Here's stuff:

Which we can represent in an object, just like Handlebars wants:

let mainTweetData = {
  "avatar": "200/abott@adorable.io.png",
  "user-fullname": "Jimmy Fiddlecakes",
  "user-username": "@everythingmatters",
  "tweet-text": "", // from our array!
  "tweet-time": "5:48 PM - 15 Sep 2017",
  "comments": contentArray.length + 1,
  "retweets": Math.floor(Math.random() * 100),
  "loves": Math.floor(Math.random() * 200),
  "tweet-number": 1,
  "tweet-total": contentArray.length
};

Now we loop over our sentences and stitch together the templates.

// .shift() off the first sentence and compile the main tweet template first
let mainTweetHtml = mainTweetTemplate(mainTweetData);
let allSubtweetsHTML = "";

// Loop over the rest of the sentences
contentArray.forEach(function(sentence, i) {
  let subtweetData = {
    // gather up the data fresh each time, randomizing numbers and,
    // most importantly, plopping in the new sentence:
    "tweet-text": sentence,
    ...
  };
  let subTweetHtml = subTweetTemplate(subtweetData);
  allSubtweetsHTML += subTweetHtml;
});

// Now dump out all this HTML somewhere onto the page:
document.querySelector("#content").innerHTML = `
  <div class="all-tweets-container">
    ${mainTweetHtml}
    ${allSubtweetsHTML}
  </div>
`;

That should do it!

I'm sure there are lots of ways to improve this, so feel free to fork the Pen and have at it. Ultimate points would be to make it a browser extension.


CSS Grid PlayGround

Fri, 09/29/2017 - 14:13

Really great work by the Mozilla gang. Curious, as they already have MDN for CSS Grid, which isn't only a straight reference; they have plenty of "guides" there, too. Not that I'm complaining; the design and learning flow of this are fantastic. And of course, I'm a fan of the "View on CodePen" links ;)

There are always lots of ways to learn something. I'm a huge fan of Rachel Andrew's totally free video series and our own guide. This also seems a bit more playground-like.


iOS 11 Safari Feature Flags

Fri, 09/29/2017 - 14:01

I was rooting around in the settings for iOS Safari the other day and stumbled upon its "Experimental Features" which act just like feature flags in any other desktop browser. This is a new feature in iOS 11 and you can find it at:

Settings > Safari > Advanced > Experimental Features

Here's what it looks like today:

Right now you can toggle on really useful things like Link Preload, CSS Spring Animations and display: contents (which Rachel Andrew wrote about a while ago). All of which could very well come in handy if you want to test your work in iOS.


HelloSign API: The dev friendly esign

Thu, 09/28/2017 - 15:40

(This is a sponsored post.)

We know that no API can write your code for you (unfortunately), but ours comes close. With in-depth documentation, customizable features, and a dashboard that makes your code easy to debug, you won't find an eSignature product with an easier path to implementation. It's 2x faster than other eSignature APIs.

“We wanted an API built by a team that valued user experience as much as we do. At the end of the day, we chose HelloSign because it was the best combination of these features, price and user experience.”  - Max Mullen Co-Founder of Instacart

Test drive HelloSign API for free today.


Foxhound

Thu, 09/28/2017 - 15:33

As of WordPress 4.7 (December 2016), WordPress has shipped with a JSON API built right in. Wanna see? Hit up this URL right here on CSS-Tricks. There are loads of docs for it.

That JSON API can be used for all sorts of things. I think APIs are often thought about in terms of using externally, like making the data available to some other website. But it's equally interesting to think about digesting that API right on the site itself. That's how so many websites are built these days anyway, with "Modern JavaScript" and all.
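To get a feel for it, here's a tiny sketch (with example.com standing in for any WordPress 4.7+ site):

// List the most recent post titles via the built-in REST API:
fetch('https://example.com/wp-json/wp/v2/posts')
  .then(response => response.json())
  .then(posts => {
    posts.forEach(post => console.log(post.title.rendered));
  });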

So it's possible to build a WordPress theme that uses its own API for all the data, making an entirely client-rendered site.

I would have thought there would be a bunch of themes like this available, but it seems it's still new enough of a concept there isn't that many. That I found, anyway. I did find Foxhound though, by Kelly Dwan. It's simple and quite nice looking:

It's React-based, so the whole thing is logically busted up into components:

I popped it up onto a test site and it works pretty good! So that I could click around and do various stuff, I imported the "theme unit test" data, which is a nice and quick way of populating a fresh WordPress install with a bunch of typical stuff (posts, authors, comments, etc.) for testing purposes.

Only a shell of a page is server-rendered, it looks like. So without JavaScript at all, you get nothing. Certainly, you could make all this work the regular server-rendered WordPress way; you'd just be duplicating a heck of a lot of work, so it's not surprising that isn't done here. I would think it's more likely you'd try to server-render the React than keep the PHP stuff and React stuff in sync.

About 50% of the URLs you click load instantly, like you'd expect in an SPA type of site. Looks like any of the links generated in that shell page that PHP renders do a refresh, and links that are rendered in React components load SPA style.

I would think this would be a really strong base to start with if you were interested in building a React-powered WordPress site. That's certainly a thing these days. I just happened to be looking at the Human Made site, and they say they did just that for ustwo:

ustwo wanted to build a decoupled website with a WordPress backend and a React frontend. Human Made joined the development team to build the WordPress component, including custom post types and a custom REST API to deliver structured data for frontend display.

So ya know, people are paying for this kind of work these days.


How Different CMS’s Handle Content Blocks

Wed, 09/27/2017 - 15:48

Imagine a very simple blog. Blog posts are just a title and a paragraph or three. In that case, having a CMS where you enter the title and those paragraphs and hit publish is perfect. Perhaps some metadata like the date and author come along for the ride. I'm gonna stick my neck out here and say that title-and-content fields only is a CMS anti-pattern. It's powerful in its flexibility but causes long-term pain in lack of control through abstraction.

Let's not have a conversation about CMS's as a whole though, let's scope this down to just that content area issue.

Now imagine we have a site with a bit more variety. We're trying to use our CMS to build all sorts of pages. Perhaps some of it is bloggish. Some of it more like landing pages. These pages are constructed from chunks of text but also different components. Maps! Sliders! Advertising! Pull quotes!

Here are four different examples, so you can see exactly what I mean:

I bet that kind of thing looks familiar.

You can absolutely pull this off by putting all those blocks into a single content field. Hey, it's just HTML! Put the HTML you need for all these blocks right into that content field and it'll do what you want.

There's a couple of significant problems with this:

  1. Not everyone is super handy with HTML. You might be setting up a CMS for other people to use who are great with content but not so much with code. Even for those folks who are comfortable with HTML, this doesn't leverage the CMS very well. It would be a lot easier, for example, to rearrange a page by dragging and dropping than it would be to carefully copy and paste HTML.
  2. The HTML-in-the-database issue. So you have five pages with an image slider. The slider requires some specific, nested, structured HTML. Are you dropping that into five different pages? That slider is bound to change, and you'll want to avoid changing it five times. Keep your HTML in templates, and data in databases.

So... what do we do? I wasn't really sure how CMS's were handling this, to be honest. So I looked around. This is what I've found.

In CraftCMS...

CraftCMS has a Matrix field type for this.

A single Matrix field can have as many types of blocks as needed, which the author can pick and choose from when adding new content. Each block type gets its own set of fields.

In Perch...

Perch handles this with what they call Blocks:

In our blog template, the body of the post is just one big area to add text. Your editor, however, might want to build up a post with images, or video, or anything else. We can give them the ability to choose between different things to put into their post using Perch Blocks.

In Statamic...

Statamic deals with this idea with Replicator meta fieldtype:

The Replicator is a meta fieldtype giving you the ability to define sets of fields that you can dynamically piece together in whatever order and arrangement you imagine.

In WordPress...

It's tempting to just shout out Advanced Custom Fields here, which seems right up this alley. I love ACF, but I don't think it's quite the same here. While you can create new custom fields to use, it's then on you to ask for that data and output it in templates in a special way. It's not a way of handling the existing content blocks.

Something like SiteOrigin Page Builder, which works by using the existing content area and widgets:

There is also the forthcoming Gutenberg editor. It's destined for WordPress core, but for now it's a plugin. Brian Jackson has a good article covering it. In the demo content of Gutenberg itself it explains it well:

The goal of this new editor is to make adding rich content to WordPress simple and enjoyable. This whole post is composed of pieces of content - somewhat similar to LEGO bricks - that you can move around and interact with. Move your cursor around and you'll notice different blocks light up with outlines and arrows. Press the arrows to reposition blocks quickly, without fearing about losing things in the process of copying and pasting.

Note the different blocks available:

In Drupal...

Drupal has a module called Paragraphs for this:

Instead of putting all their content in one WYSIWYG body field including images and videos, end-users can now choose on-the-fly between pre-defined Paragraph Types independent from one another. Paragraph Types can be anything you want from a simple text block or image to a complex and configurable slideshow.

In ModX...

ModX has a paid add-on called ContentBlocks for this:

The Modular Content principle means that you break up your content into smaller pieces of content, that can be used or parsed separately. So instead of a single blob of content, you may set a headline, an image and a text description.

Each of those small blocks of content have their own template so you can do amazing things with their values.

Of course those aren't the only CMS's on the block. How does yours handle it?


Lozad.js: Performant Lazy Loading of Images

Tue, 09/26/2017 - 14:00

There are a few different "traditional" ways of lazy loading images. They all require JavaScript to figure out if an image is currently visible within the browser's viewport or not. Traditional approaches might be:

  • Listening to scroll and resize events on the window
  • Using a timer like setInterval

Both of these have performance problems.

Why aren't traditional approaches performant?

Both of those approaches listed above are problematic because they work repeatedly and their function triggers forced layout while calculating the position of the element with respect to the viewport, to check if the element is inside the viewport or not.

To combat these performance problems, some libraries throttle the function calls that do these things, limiting the number of times they are done.

Even then, repeated layout/reflow-triggering operations consume precious time while a user interacts with the site and induce "jank" (that sluggish feeling when interacting with a site that nobody likes).

There is another approach we could use, that makes use of a new browser API designed specifically to help us with things like lazy loading: the Intersection Observer API.

That's exactly what my own library, Lozad.js, uses.

What makes Lozad.js performant?

Intersection Observers are the main ingredient. They allow registration of callback functions which get called when a monitored element enters or exits another element (or the viewport itself).

While Intersection Observers don't report exactly how many pixels overlap, they let us watch for an element intersecting another by X% (configurable), at which point the callback fires. That is exactly our use case when using Intersection Observers for lazy loading.
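Stripped of Lozad's niceties, the core idea looks something like this (a bare-bones sketch, not the library's actual source):

// Watch all images carrying a data-src attribute; when one scrolls into
// view, swap data-src into src and stop observing it.
const io = new IntersectionObserver(entries => {
  entries.forEach(entry => {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src;
      io.unobserve(img);
    }
  });
});

document.querySelectorAll('img[data-src]').forEach(img => io.observe(img));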

Quick facts about Lozad.js
  • Light-weight: just 535 bytes minified & gzipped
  • No dependencies
  • Uses the IntersectionObserver API
  • Allows lazy loading of dynamically added elements as well (not just images), through a custom load function
Usage

Install from npm:

yarn add lozad

or via CDN:

<script src="https://cdn.jsdelivr.net/npm/lozad"></script>

In your HTML, add a class to any image you wish to lazy load. The class can be changed via configuration, but "lozad" is the default.

<img class="lozad" data-src="image.png">

Also note we've removed the src attribute of the image and replaced it with data-src. This prevents the image from being loaded before the JavaScript executes and determines it should be. It's up to you to consider the implications there. With this HTML, images won't be shown at all until JavaScript executes. Nor will they be shown in contexts like RSS or other syndication. You may want to filter your HTML to only use this markup pattern when shown on your own website, and not elsewhere.

In JavaScript, initialize Lozad library with the options:

const observer = lozad(); // lazy loads elements with default selector as ".lozad"
observer.observe();

Read here about the complete list of options available in Lozad.js API.
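For example, the custom load function mentioned above looks roughly like this (adapted from memory of the docs, so treat the exact option names as an assumption and double-check there):

// Override how an element gets "loaded" - useful for lazy loading things
// that aren't plain <img> elements.
const observer = lozad('.lozad', {
  load: function(el) {
    el.src = el.getAttribute('data-src'); // or any custom loading logic
    el.classList.add('loaded');
  }
});
observer.observe();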

Demo

See the Pen oGgxJr by Apoorv Saxena (@ApoorvSaxena) on CodePen.

Browser support

Browser support is limited, as the feature is relatively new. Use the official IntersectionObserver polyfill to overcome the limited support of this API.


5 things CSS developers wish they knew before they started

Mon, 09/25/2017 - 12:54

You can learn anything, but you can't learn everything 🙃

So accept that, and focus on what matters to you

— Una Kravets 👩🏻‍💻 (@Una) September 1, 2017

Una Kravets is absolutely right. In modern CSS development, there are so many things to learn. For someone starting out today, it's hard to know where to start.

Here is a list of things I wish I had known if I were to start all over again.

1. Don't underestimate CSS

It looks easy. After all, it's just a set of rules that selects an element and modifies it based on a set of properties and values.

CSS is that, but also so much more!

A successful CSS project requires the most impeccable architecture. Poorly written CSS is brittle and quickly becomes difficult to maintain. It's critical you learn how to organize your code in order to create maintainable structures with a long lifespan.

But even an excellent code base has to deal with the insane amount of devices, screen sizes, capabilities, and user preferences. Not to mention accessibility, internationalization, and browser support!

CSS is like a bear cub: cute and inoffensive but as he grows, he'll eat you alive.

  • Learn to read code before writing and delivering code.
  • It's your responsibility to stay up to date with best practices. MDN, W3C, A List Apart, and CSS-Tricks are your sources of truth.
  • The web has no shape; each device is different. Embrace diversity and understand the environment we live in.
2. Share and participate

Sharing is so important! How I wish someone had told me that when I started. It took me ten years to understand the value of sharing; when I did, it completely changed how I viewed my work and how I collaborate with others.

You'll be a better developer if you surround yourself with good developers, so get involved in open source projects. The CSS community is full of kind and generous developers. The sooner the better.

Share everything you learn. The path is as important as the end result; even the tiniest things can make a difference to others.

  • Learn Git. Git is the language of open source and you definitely want to be part of it.
  • Get involved in an open source project.
  • Share! Write a blog, documentation, or tweets; speak at meetups and conferences.
  • Find an accountability partner, someone that will push you to share consistently.
3. Pick the right tools

Your code editor should be an extension of your mind.

It doesn't matter if you use Atom, VSCode or old school Vim; the better you shape your tool to your thought process, the better developer you'll become. You'll not only gain speed but also have an uninterrupted thought line that results in fluid ideas.

The terminal is your friend.

There is a lot more about being a CSS developer than actually writing CSS. Building your code, compiling, linting, formatting, and browser live refresh are only a small part of what you'll have to deal with on a daily basis.

  • Research which IDE is best for you. There are high performance text editors like Vim or easier to use options like Atom or VSCode.
  • Find your way around the terminal and learn the CLI as soon as possible. The short book "Working the Command Line" is a great starting point.
4. Get to know the browser

The browser is not only your canvas, but also a powerful inspector to debug your code, test performance, and learn from others.

Learning how the browser renders your code is an eye-opening experience that will take your coding skills to the next level.

Every browser is different; get to know those differences and embrace them. Love them for what they are. (Yes, even IE.)

  • Spend time looking around the inspector.
  • You'll not be able to own every single device; get a BrowserStack or CrossBrowserTesting account, it's worth it.
  • Install every browser you can and learn how each one of them renders your code.
5. Learn to write maintainable CSS

It'll probably take you years, but if there is just one single skill a CSS developer should have, it is to write maintainable structures.

This means knowing exactly how the cascade, the box model, and specificity work. Master CSS architecture models, learn their pros and cons, and how to implement them.

Remember that a modular architecture leads to independent modules, good performance, accessible structures, and responsive components (AKA: CSS happiness).

The future looks bright

Modern CSS is amazing. Its future is even better. I love CSS and enjoy every second I spend coding.

If you need help, you can reach out to me or probably any of the CSS developers mentioned in this article. You might be surprised by how kind and generous the CSS community can be.

What do you think about my advice? What other advice would you give? Let me know what you think in the comments.


Designing Websites for iPhone X

Mon, 09/25/2017 - 12:19

We've already covered "The Notch" and the options for dealing with it from an HTML and CSS perspective. There is a bit more detail available now, straight from the horse's mouth:

Safe area insets are not a replacement for margins.

... we want to specify that our padding should be the default padding or the safe area inset, whichever is greater. This can be achieved with the brand-new CSS functions min() and max() which will be available in a future Safari Technology Preview release.

@supports(padding: max(0px)) {
  .post {
    padding-left: max(12px, constant(safe-area-inset-left));
    padding-right: max(12px, constant(safe-area-inset-right));
  }
}

It is important to use @supports to feature-detect min and max, because they are not supported everywhere, and due to CSS’s treatment of invalid variables, to not specify a variable inside your @supports query.

Jeremy Keith's hot takes have been especially tasty, like:

You could add a bunch of proprietary CSS that Apple just pulled out of their ass.

Or you could make sure to set a background colour on your body element.

I recommend the latter.

And:

This could be a one-word article: don’t.

More specifically, don’t design websites for any specific device.

Although if this pushes support forward for min() and max() as generic functions, that's cool.


Marvin Visions

Sun, 09/24/2017 - 23:53

Marvin Visions is a new typeface designed in the spirit of those letters you’d see in scruffy old 80's sci-fi books. This specimen site has a really beautiful layout that's worth exploring and reading about the design process behind the work.


The Importance Of JavaScript Abstractions When Working With Remote Data

Fri, 09/22/2017 - 16:12

Recently I had the experience of reviewing a project and assessing its scalability and maintainability. There were a few bad practices here and there, a few strange pieces of code with a lack of meaningful comments. Nothing uncommon for a relatively big (legacy) codebase, right?

However, there was something that I kept finding. A pattern that repeated itself throughout this codebase and a number of other projects I've looked through: they could all be summarized by a lack of abstraction. Ultimately, this was the cause of the maintenance difficulty.

In object-oriented programming, abstraction is one of the three central principles (along with encapsulation and inheritance). Abstraction is valuable for two key reasons:

  • Abstraction hides certain details and only shows the essential features of the object. It tries to reduce and factor out details so that the developer can focus on a few concepts at a time. This approach improves understandability as well as maintainability of the code.
  • Abstraction helps us to reduce code duplication. Abstraction provides ways of dealing with crosscutting concerns and enables us to avoid tightly coupled code.

The lack of abstraction inevitably leads to problems with maintainability.

Often I've seen colleagues that want to take a step further towards more maintainable code, but they struggle to figure out and implement fundamental abstractions. Therefore, in this article, I'll share a few useful abstractions I use for the most common thing in the web world: working with remote data.

It's important to mention that, just like everything in the JavaScript world, there are tons of ways and different approaches how to implement a similar concept. I'll share my approach, but feel free to upgrade it or to tweak it based on your own needs. Or even better - improve it and share it in the comments below! ❤️

API Abstraction

I haven't had a project which doesn't use an external API to receive and send data in a while. That's usually one of the first and fundamental abstractions I define. I try to store as much API-related configuration and settings there as possible, like:

  • the API base url
  • the request headers
  • the global error handling logic

const API = {
  /**
   * Simple service for generating different HTTP codes. Useful for
   * testing how your own scripts deal with varying responses.
   */
  url: 'http://httpstat.us/',

  /**
   * fetch() will only reject a promise if the user is offline,
   * or some unlikely networking error occurs, such as a DNS lookup failure.
   * However, there is a simple `ok` flag that indicates
   * whether an HTTP response's status code is in the successful range.
   */
  _handleError(_res) {
    return _res.ok ? _res : Promise.reject(_res.statusText);
  },

  /**
   * Get abstraction.
   * @return {Promise}
   */
  get(_endpoint) {
    return window.fetch(this.url + _endpoint, {
      method: 'GET',
      headers: new Headers({ 'Accept': 'application/json' })
    })
    .then(this._handleError)
    .catch(error => { throw new Error(error) });
  },

  /**
   * Post abstraction.
   * @return {Promise}
   */
  post(_endpoint, _body) {
    return window.fetch(this.url + _endpoint, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: _body,
    })
    .then(this._handleError)
    .catch(error => { throw new Error(error) });
  }
};

In this module, we have 2 public methods, get() and post() which both return a Promise. On all places where we need to work with remote data, instead of directly calling the Fetch API via window.fetch(), we use our API module abstraction - API.get() or API.post().

Therefore, the Fetch API is not tightly coupled with our code.
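Usage elsewhere in the codebase then looks like this (the '200' endpoint simply exercises the httpstat.us service configured above):

// Call sites never touch window.fetch() directly:
API.get('200')
  .then(response => console.log('Success:', response.status))
  .catch(error => console.error(error));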

Let's say down the road we read Zell Liew's comprehensive summary of using Fetch and we realize that our error handling is not really as advanced as it could be. We want to check the content type before we proceed with our logic any further. No problem. We modify only our API module; the public methods API.get() and API.post() we use everywhere else work just fine.

const API = {
  /* ... */

  /**
   * Check whether the content type is correct before you process it further.
   */
  _handleContentType(_response) {
    const contentType = _response.headers.get('content-type');

    if (contentType && contentType.includes('application/json')) {
      return _response.json();
    }

    return Promise.reject('Oops, we haven\'t got JSON!');
  },

  get(_endpoint) {
    return window.fetch(this.url + _endpoint, {
      method: 'GET',
      headers: new Headers({ 'Accept': 'application/json' })
    })
    .then(this._handleError)
    .then(this._handleContentType)
    .catch(error => { throw new Error(error) });
  },

  post(_endpoint, _body) {
    return window.fetch(this.url + _endpoint, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: _body
    })
    .then(this._handleError)
    .then(this._handleContentType)
    .catch(error => { throw new Error(error) });
  }
};

Let's say we decide to switch to zlFetch, the library which Zell introduces that abstracts away the handling of the response (so you can skip ahead and handle both your data and errors without worrying about the response). As long as our public methods return a Promise, no problem:

import zlFetch from 'zl-fetch';

const API = {
  /* ... */

  /**
   * Get abstraction.
   * @return {Promise}
   */
  get(_endpoint) {
    return zlFetch(this.url + _endpoint, { method: 'GET' })
      .catch(error => { throw new Error(error) });
  },

  /**
   * Post abstraction.
   * @return {Promise}
   */
  post(_endpoint, _body) {
    return zlFetch(this.url + _endpoint, { method: 'post', body: _body })
      .catch(error => { throw new Error(error) });
  }
};

Let's say down the road due to whatever reason we decide to switch to jQuery Ajax for working with remote data. Not a huge deal once again, as long as our public methods return a Promise. The jqXHR objects returned by $.ajax() as of jQuery 1.5 implement the Promise interface, giving them all the properties, methods, and behavior of a Promise.

const API = {
  /* ... */

  /**
   * Get abstraction.
   * @return {Promise}
   */
  get(_endpoint) {
    return $.ajax({
      method: 'GET',
      url: this.url + _endpoint
    });
  },

  /**
   * Post abstraction.
   * @return {Promise}
   */
  post(_endpoint, _body) {
    return $.ajax({
      method: 'POST',
      url: this.url + _endpoint,
      data: _body
    });
  }
};

But even if jQuery's $.ajax() didn't return a Promise, you can always wrap anything in a new Promise(). All good. Maintainability++!
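For instance, a hypothetical wrapper (someLegacyAjaxLib and its getJSON signature are made up for illustration):

get(_endpoint) {
  // Wrap a callback-style helper so it satisfies our "returns a Promise" contract:
  return new Promise((resolve, reject) => {
    someLegacyAjaxLib.getJSON(this.url + _endpoint, {
      success: resolve,
      error: reject
    });
  });
}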

Now let's abstract away the receiving and storing of the data locally.

Data Repository

Let's assume we need to fetch the current weather. The API returns the temperature, feels-like temperature, wind speed (m/s), pressure (hPa) and humidity (%). Also, a common pattern: in order for the JSON response to be as slim as possible, attribute names are compressed to their first letter. So here's what we receive from the server:

{
  "t": 30,
  "f": 32,
  "w": 6.7,
  "p": 1012,
  "h": 38
}

We could go ahead and use API.get('weather').t and API.get('weather').w wherever we need it, but that doesn't look semantically awesome. I'm not a fan of the one-letter-not-much-context naming.

Additionally, let's say we don't use the humidity (h) and the feels-like temperature (f) anywhere. We don't need them. Actually, the server might return us a lot of other information, but we might want to use only a couple of parameters. Not restricting what our weather module actually needs (stores) could grow into a big overhead.

Enter repository-ish pattern abstraction!

import API from './api.js'; // Import it into your code however you like

const WeatherRepository = {
  _normalizeData(currentWeather) {
    // Take only what our app needs and nothing more.
    const { t, w, p } = currentWeather;

    return {
      temperature: t,
      windspeed: w,
      pressure: p
    };
  },

  /**
   * Get current weather.
   * @return {Promise}
   */
  get() {
    return API.get('/weather')
      .then(this._normalizeData);
  }
}

Now throughout our codebase use WeatherRepository.get() and access meaningful attributes like .temperature and .windspeed. Better!

Additionally, via the _normalizeData() we expose only parameters we need.
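A call site then reads naturally:

// Meaningful names, and no knowledge of the API's one-letter attributes:
WeatherRepository.get().then(weather => {
  console.log(`${weather.temperature} degrees, wind ${weather.windspeed} m/s`);
});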

There is one more big benefit. Imagine we need to wire-up our app with another weather API. Surprise, surprise, this one's response attributes names are different:

{
  "temp": 30,
  "feels": 32,
  "wind": 6.7,
  "press": 1012,
  "hum": 38
}

No worries! Having our WeatherRepository abstraction, all we need to tweak is the _normalizeData() method! Not a single other module (or file).

const WeatherRepository = {
  _normalizeData(currentWeather) {
    // Take only what our app needs and nothing more.
    const { temp, wind, press } = currentWeather;

    return {
      temperature: temp,
      windspeed: wind,
      pressure: press
    };
  },

  /* ... */
};

The attribute names of the API response object are not tightly coupled with our codebase. Maintainability++!

Down the road, say we want to display the cached weather info if the currently fetched data is not older than 15 minutes. So, we choose to use localStorage to store the weather info, instead of doing an actual network request and calling the API each time WeatherRepository.get() is referenced.

As long as WeatherRepository.get() returns a Promise, we don't need to change the implementation in any other module. All other modules which want to access the current weather don't (and shouldn't) care how the data is retrieved - if it comes from the local storage, from an API request, via Fetch API or via jQuery's $.ajax(). That's irrelevant. They only care to receive it in the "agreed" format they implemented - a Promise which wraps the actual weather data.

So, we introduce two "private" methods: _isDataUpToDate(), to check if our data is older than 15 minutes or not, and _storeData(), to simply store our data in the browser storage.

const WeatherRepository = {
  /* ... */

  /**
   * Checks whether the data is up to date or not.
   * @return {Boolean}
   */
  _isDataUpToDate(_localStore) {
    const isDataMissing =
      _localStore === null || Object.keys(_localStore.data).length === 0;

    if (isDataMissing) {
      return false;
    }

    const { lastFetched } = _localStore;
    const outOfDateAfter = 15 * 60 * 1000; // 15 minutes

    const isDataUpToDate =
      (new Date().valueOf() - lastFetched) < outOfDateAfter;

    return isDataUpToDate;
  },

  _storeData(_weather) {
    window.localStorage.setItem('weather', JSON.stringify({
      lastFetched: new Date().valueOf(),
      data: _weather
    }));

    return _weather; // pass the data along so the Promise chain resolves with it
  },

  /**
   * Get current weather.
   * @return {Promise}
   */
  get() {
    const localData = JSON.parse(window.localStorage.getItem('weather'));

    if (this._isDataUpToDate(localData)) {
      return new Promise(_resolve => _resolve(localData.data));
    }

    return API.get('/weather')
      .then(this._normalizeData)
      .then(this._storeData);
  }
};

Finally, we tweak the get() method: in case the weather data is up to date, we wrap it in a Promise and we return it. Otherwise - we issue an API call. Awesome!

There could be other use-cases, but I hope you got the idea. If a change requires you to tweak only one module - that's excellent! You designed the implementation in a maintainable way!

If you decide to use this repository-ish pattern, you might notice that it leads to some code and logic duplication, because all data repositories (entities) you define in your project will probably have methods like _isDataUpToDate(), _normalizeData(), _storeData() and so on...

Since I use it heavily in my projects, I decided to create a library around this pattern that does exactly what I described in this article, and more!

Introducing SuperRepo

SuperRepo is a library that helps you implement best practices for working with and storing data on the client-side.

/**
 * 1. Define where you want to store the data,
 *    in this example, in the LocalStorage.
 *
 * 2. Then - define a name of your data repository,
 *    it's used for the LocalStorage key.
 *
 * 3. Define when the data will get out of date.
 *
 * 4. Finally, define your data model, set custom attribute name
 *    for each response item, like we did above with `_normalizeData()`.
 *    In the example, server returns the params 't', 'w', 'p',
 *    we map them to 'temperature', 'windspeed', and 'pressure' instead.
 */
const WeatherRepository = new SuperRepo({
  storage: 'LOCAL_STORAGE',               // [1]
  name: 'weather',                        // [2]
  outOfDateAfter: 5 * 60 * 1000, // 5 min // [3]
  request: () => API.get('weather'),      // Function that returns a Promise
  dataModel: {                            // [4]
    temperature: 't',
    windspeed: 'w',
    pressure: 'p'
  }
});

/**
 * From here on, you can use the `.getData()` method to access your data.
 * It will first check if our data is outdated (based on the `outOfDateAfter`).
 * If so - it will do a server request to get fresh data,
 * otherwise - it will get it from the cache (Local Storage).
 */
WeatherRepository.getData().then(data => {
  // Do something awesome.
  console.log(`It is ${data.temperature} degrees`);
});

The library does the same things we implemented before:

  • Gets data from the server (if it's missing or out of date on our side) or otherwise - gets it from the cache.
  • Just like we did with _normalizeData(), the dataModel option applies a mapping to our rough data. This means:
    • Throughout our codebase, we will access meaningful and semantic attributes like .temperature and .windspeed instead of .t and .w.
    • Expose only parameters you need and simply don't include any others.
    • If the response attribute names change (or you need to wire-up another API with a different response structure), you only need to tweak here - in only 1 place of your codebase.

Plus, a few additional improvements:

  • Performance: if WeatherRepository.getData() is called multiple times from different parts of our app, only 1 server request is triggered.
  • Scalability:
    • You can store the data in the localStorage, in the browser storage (if you're building a browser extension), or in a local variable (if you don't want to store data across browser sessions). See the options for the storage setting.
    • You can initiate an automatic data sync with WeatherRepository.initSyncer(). This will initiate a setInterval, which will count down to the point when the data is out of date (based on the outOfDateAfter value) and will trigger a server request to get fresh data. Sweet.

To use SuperRepo, install (or simply download) it with NPM or Bower:

npm install --save super-repo

Then, import it into your code via one of the 3 methods available:

  • Static HTML:

    <script src="/node_modules/super-repo/src/index.js"></script>

  • Using ES6 Imports:

    // If transpiler is configured (Traceur Compiler, Babel, Rollup, Webpack)
    import SuperRepo from 'super-repo';

  • … or using CommonJS Imports:

    // If module loader is configured (RequireJS, Browserify, Neuter)
    const SuperRepo = require('super-repo');

And finally, define your SuperRepositories :)

For advanced usage, read the documentation I wrote. Examples included!

Summary

The abstractions I described above could be one fundamental part of the architecture and software design of your app. As your experience grows, try to think about and apply similar concepts not only when working with remote data, but in other cases where they make sense, too.

When implementing a feature, always try to discuss change resilience, maintainability, and scalability with your team. Future you will thank you for that!


Creating a Static API from a Repository

Thu, 09/21/2017 - 14:28

When I first started building websites, the proposition was quite basic: take content, which may or may not be stored in some form of database, and deliver it to people's browsers as HTML pages. Over the years, countless products used that simple model to offer all-in-one solutions for content management and delivery on the web.

Fast-forward a decade or so and developers are presented with a very different reality. With such a vast landscape of devices consuming digital content, it's now imperative to consider how content can be delivered not only to web browsers, but also to native mobile applications, IoT devices, and other mediums yet to come.

Even within the realms of the web browser, things have also changed: client-side applications are becoming more and more ubiquitous, with challenges to content delivery that didn't exist in traditional server-rendered pages.

The answer to these challenges almost invariably involves creating an API — a way of exposing data in such a way that it can be requested and manipulated by virtually any type of system, regardless of its underlying technology stack. Content represented in a universal format like JSON is fairly easy to pass around, from a mobile app to a server, from the server to a client-side application and pretty much anything else.

Embracing this API paradigm comes with its own set of challenges. Designing, building and deploying an API is not exactly straightforward, and can actually be a daunting task to less experienced developers or to front-enders that simply want to learn how to consume an API from their React/Angular/Vue/Etc applications without getting their hands dirty with database engines, authentication or data backups.

Back to Basics

I love the simplicity of static sites and I particularly like this new era of static site generators. The idea of a website using a group of flat files as a data store is also very appealing to me, and using something like GitHub means the possibility of having a data set available as a public repository on a platform that allows anyone to easily contribute, with pull requests and issues being excellent tools for moderation and discussion.

Imagine having a site where people find a typo in an article and submit a pull request with the correction, or accepting submissions for new content with an open forum for discussion, where the community itself can filter and validate what ultimately gets published. To me, this is quite powerful.

I started toying with the idea of applying these principles to the process of building an API instead of a website — if programs like Jekyll or Hugo take a bunch of flat files and create HTML pages from them, could we build something to turn them into an API instead?

Static Data Stores

Let me show you two examples that I came across recently of GitHub repositories used as data stores, along with some thoughts on how they're structured.

The first example is the ESLint website, where every single ESLint rule is listed along with its options and associated examples of correct and incorrect code. Information for each rule is stored in a Markdown file annotated with a YAML front matter section. Storing the content in this human-friendly format makes it easy for people to author and maintain, but not very simple for other applications to consume programmatically.

The second example of a static data store is MDN's browser-compat-data, a compendium of browser compatibility information for CSS, JavaScript and other technologies. Data is stored as JSON files, which, conversely to the ESLint case, are a breeze to consume programmatically but a pain for people to edit, as JSON is very strict and human errors can easily lead to malformed files.

There are also some limitations stemming from the way data is grouped together. ESLint has a file per rule, so there's no way to, say, get a list of all the rules specific to ES6, unless they chuck them all into the same file, which would be highly impractical. The same applies to the structure used by MDN.

A static site generator solves these two problems for normal websites — they take human-friendly files, like Markdown, and transform them into something tailored for other systems to consume, typically HTML. They also provide ways, through their template engines, to take the original files and group their rendered output in any way imaginable.

Similarly, the same concept applied to APIs — a static API generator? — would need to do the same, allowing developers to keep data in smaller files, using a format they're comfortable with for an easy editing process, and then process them in such a way that multiple endpoints with various levels of granularity can be created, transformed into a format like JSON.

Building a Static API Generator

Imagine an API with information about movies. Each title should have information about the runtime, budget, revenue, and popularity, and entries should be grouped by language, genre, and release year.

To represent this dataset as flat files, we could store each movie and its attributes as a text file, using YAML or any other data serialization language.

budget: 170000000
website: http://marvel.com/guardians
tmdbID: 118340
imdbID: tt2015381
popularity: 50.578093
revenue: 773328629
runtime: 121
tagline: All heroes start somewhere.
title: Guardians of the Galaxy

To group movies, we can store the files within language, genre and release year sub-directories, as shown below.

input/
├── english
│   ├── action
│   │   ├── 2014
│   │   │   └── guardians-of-the-galaxy.yaml
│   │   ├── 2015
│   │   │   ├── jurassic-world.yaml
│   │   │   └── mad-max-fury-road.yaml
│   │   ├── 2016
│   │   │   ├── deadpool.yaml
│   │   │   └── the-great-wall.yaml
│   │   └── 2017
│   │       ├── ghost-in-the-shell.yaml
│   │       ├── guardians-of-the-galaxy-vol-2.yaml
│   │       ├── king-arthur-legend-of-the-sword.yaml
│   │       ├── logan.yaml
│   │       └── the-fate-of-the-furious.yaml
│   └── horror
│       ├── 2016
│       │   └── split.yaml
│       └── 2017
│           ├── alien-covenant.yaml
│           └── get-out.yaml
└── portuguese
    └── action
        └── 2016
            └── tropa-de-elite.yaml

Without writing a line of code, we can get something that is kind of an API (although not a very useful one) by simply serving the `input/` directory above using a web server. To get information about a movie, say, Guardians of the Galaxy, consumers would hit:

http://localhost/english/action/2014/guardians-of-the-galaxy.yaml

and get the contents of the YAML file.

Using this very crude concept as a starting point, we can build a tool — a static API generator — to process the data files in such a way that their output resembles the behavior and functionality of a typical API layer.

Format translation

The first issue with the solution above is that the format chosen to author the data files might not be the best format for the output. A human-friendly serialization format like YAML or TOML should make the authoring process easier and less error-prone, but the API consumers will probably expect something like XML or JSON.

Our static API generator can easily solve this by visiting each data file and transforming its contents to JSON, saving the result to a new file with the exact same path as the source, except for the parent directory (e.g. `output/` instead of `input/`), leaving the original untouched.
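A minimal sketch of that transformation step, assuming Node 10+ and the `glob` and `js-yaml` packages (this illustrates the idea; the module's actual internals may differ):

```js
const fs = require('fs')
const path = require('path')
const glob = require('glob')
const yaml = require('js-yaml')

// Visit every YAML source file and write a JSON twin under output/
glob.sync('input/**/*.yaml').forEach(inputPath => {
  const data = yaml.load(fs.readFileSync(inputPath, 'utf8'))
  const outputPath = inputPath
    .replace(/^input\//, 'output/')
    .replace(/\.yaml$/, '.json')

  fs.mkdirSync(path.dirname(outputPath), { recursive: true })
  fs.writeFileSync(outputPath, JSON.stringify(data, null, 2))
})
```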

This results in a 1-to-1 mapping between source and output files. If we now served the `output/` directory, consumers could get data for Guardians of the Galaxy in JSON by hitting:

http://localhost/english/action/2014/guardians-of-the-galaxy.json

whilst still allowing editors to author files using YAML or another format of their choosing.

{ "budget": 170000000, "website": "http://marvel.com/guardians", "tmdbID": 118340, "imdbID": "tt2015381", "popularity": 50.578093, "revenue": 773328629, "runtime": 121, "tagline": "All heroes start somewhere.", "title": "Guardians of the Galaxy" } Aggregating data

With consumers now able to consume entries in the best-suited format, let's look at creating endpoints where data from multiple entries are grouped together. For example, imagine an endpoint that lists all movies in a particular language and of a given genre.

The static API generator can create these by visiting all subdirectories at the level being used to aggregate entries, and recursively saving their sub-trees to files placed at the root of those subdirectories. This would generate endpoints like:

http://localhost/english/action.json

which would allow consumers to list all action movies in English, or

http://localhost/english.json

to get all English movies.

{ "results": [ { "budget": 150000000, "website": "http://www.thegreatwallmovie.com/", "tmdbID": 311324, "imdbID": "tt2034800", "popularity": 21.429666, "revenue": 330642775, "runtime": 103, "tagline": "1700 years to build. 5500 miles long. What were they trying to keep out?", "title": "The Great Wall" }, { "budget": 58000000, "website": "http://www.foxmovies.com/movies/deadpool", "tmdbID": 293660, "imdbID": "tt1431045", "popularity": 23.993667, "revenue": 783112979, "runtime": 108, "tagline": "Witness the beginning of a happy ending", "title": "Deadpool" } ] }

To make things more interesting, we can also make it capable of generating an endpoint that aggregates entries from multiple diverging paths, like all movies released in a particular year. At first, it may seem like just another variation of the examples shown above, but it's not. The files corresponding to the movies released in any given year may be located at an indeterminate number of directories — for example, the movies from 2016 are located at `input/english/action/2016`, `input/english/horror/2016` and `input/portuguese/action/2016`.

We can make this possible by creating a snapshot of the data tree and manipulating it as necessary, changing the root of the tree depending on the aggregator level chosen, allowing us to have endpoints like http://localhost/2016.json.
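As a rough sketch of that idea (a hypothetical helper, not the module's actual code), collecting every movie for a given year could be a recursive walk that prunes non-matching year directories:

```js
const fs = require('fs')
const path = require('path')

// Recursively collect every entry under the `year` directories that
// match, regardless of which language/genre branch they live under
function collectByYear (dir, year, depth = 0, results = []) {
  for (const name of fs.readdirSync(dir)) {
    const fullPath = path.join(dir, name)

    if (fs.statSync(fullPath).isDirectory()) {
      // depth 2 corresponds to the year level in language/genre/year
      if (depth === 2 && name !== String(year)) continue
      collectByYear(fullPath, year, depth + 1, results)
    } else if (depth > 2 && name.endsWith('.json')) {
      results.push(JSON.parse(fs.readFileSync(fullPath, 'utf8')))
    }
  }
  return results
}

// Would back an endpoint like /2016.json
const movies2016 = collectByYear('output', 2016)
```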

Pagination

Just like with traditional APIs, it's important to have some control over the number of entries added to an endpoint — as our movie data grows, an endpoint listing all English movies would probably have thousands of entries, making the payload extremely large and consequently slow and expensive to transmit.

To fix that, we can define the maximum number of entries an endpoint can have, and every time the static API generator is about to write entries to a file, it divides them into batches and saves them to multiple files. If there are too many action movies in English to fit in:

http://localhost/english/action.json

we'd have

http://localhost/english/action-2.json

and so on.
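The batching itself can be a simple chunking routine; for example (illustrative only):

```js
const ITEMS_PER_PAGE = 2

// Split a flat list of entries into page-sized batches
function paginate (entries, itemsPerPage = ITEMS_PER_PAGE) {
  const pages = []

  for (let i = 0; i < entries.length; i += itemsPerPage) {
    pages.push(entries.slice(i, i + itemsPerPage))
  }

  return pages
}

// pages[0] would be written to english/action.json,
// pages[1] to english/action-2.json, and so on
```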

For easier navigation, we can add a metadata block informing consumers of the total number of entries and pages, as well as the URL of the previous and next pages when applicable.

{ "results": [ { "budget": 150000000, "website": "http://www.thegreatwallmovie.com/", "tmdbID": 311324, "imdbID": "tt2034800", "popularity": 21.429666, "revenue": 330642775, "runtime": 103, "tagline": "1700 years to build. 5500 miles long. What were they trying to keep out?", "title": "The Great Wall" }, { "budget": 58000000, "website": "http://www.foxmovies.com/movies/deadpool", "tmdbID": 293660, "imdbID": "tt1431045", "popularity": 23.993667, "revenue": 783112979, "runtime": 108, "tagline": "Witness the beginning of a happy ending", "title": "Deadpool" } ], "metadata": { "itemsPerPage": 2, "pages": 3, "totalItems": 6, "nextPage": "/english/action-3.json", "previousPage": "/english/action.json" } } Sorting

It's useful to be able to sort entries by any of their properties, like sorting movies by popularity in descending order. This is a trivial operation that takes place at the point of aggregating entries.
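Since the aggregated entries are already held in an array at that point, a single comparator does the job:

```js
// Most popular movies first
entries.sort((a, b) => b.popularity - a.popularity)
```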

Putting it all together

With the specification done, it was time to build the actual static API generator app. I decided to use Node.js and to publish it as an npm module so that anyone can take their data and get an API off the ground effortlessly. I called the module static-api-generator (original, right?).

To get started, create a new folder and place your data structure in a sub-directory (e.g. `input/` from earlier). Then initialize a blank project and install the dependencies.

```bash
npm init -y
npm install static-api-generator --save
```

The next step is to load the generator module and create an API. Start a blank file called `server.js` and add the following.

```js
const API = require('static-api-generator')
const moviesApi = new API({
  blueprint: 'input/:language/:genre/:year/:movie',
  outputPath: 'output'
})
```

In the example above we start by defining the API blueprint, which is essentially naming the various levels so that the generator knows whether a directory represents a language or a genre just by looking at its depth. We also specify the directory where the generated files will be written to.

Next, we can start creating endpoints. For something basic, we can generate an endpoint for each movie. The following will give us endpoints like /english/action/2016/deadpool.json.

```js
moviesApi.generate({
  endpoints: ['movie']
})
```

We can aggregate data at any level. For example, we can generate additional endpoints for genres, like /english/action.json.

```js
moviesApi.generate({
  endpoints: ['genre', 'movie']
})
```

To aggregate entries from multiple diverging paths of the same parent, like all action movies regardless of their language, we can specify a new root for the data tree. This will give us endpoints like /action.json.

```js
moviesApi.generate({
  endpoints: ['genre', 'movie'],
  root: 'genre'
})
```

By default, an endpoint for a given level will include information about all its sub-levels — for example, an endpoint for a genre will include information about languages, years and movies. But we can change that behavior and specify which levels to include and which ones to bypass.

The following will generate endpoints for genres with information about languages and movies, bypassing years altogether.

```js
moviesApi.generate({
  endpoints: ['genre'],
  levels: ['language', 'movie'],
  root: 'genre'
})
```

Finally, type `npm start` to generate the API and watch the files being written to the output directory. Your new API is ready to serve. Enjoy!
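For `npm start` to do that, `package.json` needs a start script pointing at the file we created earlier:

```json
{
  "scripts": {
    "start": "node server.js"
  }
}
```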

Deployment

At this point, this API consists of a bunch of flat files on a local disk. How do we get it live? And how do we make the generation process described above part of the content management flow? Surely we can't ask editors to manually run this tool every time they want to make a change to the dataset.

GitHub Pages + Travis CI

If you're using a GitHub repository to host the data files, then GitHub Pages is a perfect candidate to serve them. It works by taking all the files committed to a certain branch and making them accessible on a public URL, so if you take the API generated above and push the files to a gh-pages branch, you can access your API on http://YOUR-USERNAME.github.io/english/action/2016/deadpool.json.

We can automate the process with a CI tool, like Travis. It can listen for changes on the branch where the source files will be kept (e.g. master), run the generator script and push the new set of files to gh-pages. This means that the API will automatically pick up any change to the dataset within a matter of seconds – not bad for a static API!

After signing up to Travis and connecting the repository, go to the Settings panel and scroll down to Environment Variables. Create a new variable called GITHUB_TOKEN and insert a GitHub Personal Access Token with write access to the repository – don't worry, the token will be safe.

Finally, create a file named `.travis.yml` on the root of the repository with the following.

```yaml
language: node_js
node_js:
  - "7"
script: npm start
deploy:
  provider: pages
  skip_cleanup: true
  github_token: $GITHUB_TOKEN
  on:
    branch: master
  local_dir: "output"
```

And that's it. To see if it works, commit a new file to the master branch and watch Travis build and publish your API. Oh, and GitHub Pages has full support for CORS, so consuming the API from a front-end application using Ajax requests will be a breeze.

You can check out the demo repository for my Movies API and see some of the endpoints in action.

Going full circle with Staticman

Perhaps the most blatant consequence of using a static API is that it's inherently read-only – we can't simply set up a POST endpoint to accept data for new movies if there's no logic on the server to process it. If this is a strong requirement for your API, that's a sign that a static approach probably isn't the best choice for your project, much in the same way that choosing Jekyll or Hugo for a site with high levels of user-generated content is probably not ideal.

But if you just need some basic form of accepting user data, or you're feeling wild and want to go full throttle on this static API adventure, there's something for you. Last year, I created a project called Staticman, which tries to solve the exact problem of adding user-generated content to static sites.

It consists of a server that receives POST requests, submitted from a plain form or sent as a JSON payload via Ajax, and pushes data as flat files to a GitHub repository. For every submission, a pull request will be created for your approval (or the files will be committed directly if you disable moderation).

You can configure the fields it accepts, add validation and spam protection, and choose the format of the generated files, like JSON or YAML.
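As a rough illustration, a `staticman.yml` file in the repository could describe a movie submission like this (the property names come from Staticman's documentation, but this particular movies setup is hypothetical):

```yaml
movies:
  allowedFields: ["title", "budget", "revenue", "runtime", "tagline", "website"]
  branch: "master"
  format: "yaml"
  moderation: true
  path: "input/english/action/2017"
```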

This is perfect for our static API setup, as it allows us to create a user-facing form or a basic CMS interface where new genres or movies can be added. When a form is submitted with a new entry, the following happens:

  1. Staticman receives the data, writes it to a file and creates a pull request
  2. Once the pull request is merged, the branch with the source files (master) is updated
  3. Travis detects the update and triggers a new build of the API
  4. The updated files are pushed to the public branch (gh-pages)
  5. The live API reflects the submitted entry
Parting thoughts

To be clear, this article does not attempt to revolutionize the way production APIs are built. More than anything, it takes the existing and ever-popular concept of statically-generated sites and translates it to the context of APIs, hopefully keeping the simplicity and robustness associated with the paradigm.

In times where APIs are such fundamental pieces of any modern digital product, I'm hoping this tool can democratize the process of designing, building and deploying them, and eliminate the entry barrier for less experienced developers.

The concept could be extended even further, introducing things like custom generated fields: fields automatically populated by the generator based on user-defined logic that takes into account not only the entry being created, but the dataset as a whole. For example, imagine a rank field for movies, where a numeric value is computed by comparing the popularity of an entry against the global average.
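A sketch of that hypothetical rank field might look like this:

```js
// Hypothetical computed field: score each movie's popularity against
// the average popularity of the entire dataset
function addRank (movies) {
  const average = movies.reduce((sum, movie) => sum + movie.popularity, 0) / movies.length

  return movies.map(movie => Object.assign({}, movie, {
    rank: movie.popularity / average
  }))
}
```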

If you decide to use this approach and have any feedback/issues to report, or even better, if you actually build something with it, I'd love to hear from you!

Creating a Static API from a Repository is a post from CSS-Tricks

No Joke…Download Anything You Want on Storyblocks

Thu, 09/21/2017 - 14:27

(This is a sponsored post.)

Storyblocks is giving CSS-Tricks followers 7 days of complimentary downloads! Choose from over 400,000 stock photos, icons, vectors, backgrounds, illustrations, and more from the Storyblocks Member Library. Grab 20 downloads per day for 7 days. Also, save 60% on millions of additional Marketplace images, where artists take home 100% of sales. Everything you download is yours to keep and use forever—royalty-free. Storyblocks regularly adds new content so there’s always something fresh to see. All the stock your heart desires! Get millions of high-quality stock images for a fraction of the cost. Start your 7 days of complimentary downloads today!

Direct Link to ArticlePermalink

No Joke…Download Anything You Want on Storyblocks is a post from CSS-Tricks

The All-New Guide to CSS Support in Email

Wed, 09/20/2017 - 20:06

Campaign Monitor has completely updated its guide to CSS support in email. Although there was a four-year gap between updates (and this thing has been around for 10 years!), it's continued to be something I reference often when designing and developing for email.

Calling this an update is underselling the work put into this. According to the post:

The previous guide included 111 different features, whereas the new guide covers a total of 278 features.

Adding reference and testing results for 167 new features is pretty amazing. Even recent features like CSS Grid are included — and, spoiler alert, there is a smidgeon of Grid support out in the wild.

This is an entire redesign of the guide and it's well worth the time to sift through it for anyone who does any amount of email design or development. Of course, testing tools are still super important to the overall email workflow, but a guide like this helps with making good design and development decisions up front that should make testing more about... well, testing, rather than discovering what is possible.

Direct Link to ArticlePermalink

The All-New Guide to CSS Support in Email is a post from CSS-Tricks

The Modlet Workflow: Improve Your Development Workflow with StealJS

Wed, 09/20/2017 - 14:59

You've been convinced of the benefits the modlet workflow provides and you want to start building your components with their own test and demo pages. Whether you're starting a new project or updating your current one, you need a module loader and bundler that doesn't require build configuration for every test and demo page you want to make.

StealJS is the answer. It can load JavaScript modules in any format (AMD, CJS, etc.) and load other file types (Less, TypeScript, etc.) with plugins. It needs minimal configuration and, unlike webpack, it doesn't require a build to load your dependencies in development. Last but not least, you can use StealJS with any JavaScript library or framework, including CanJS, React, Vue, etc.

In this tutorial, we're going to add StealJS to a project, create a component with Preact, create an interactive demo page, and create a test page.

Article Series:
  1. The Key to Building Large JavaScript Apps: The Modlet Workflow
  2. Improve Your Development Workflow with StealJS (You are here!)
1. Creating a new project

If you already have an existing Node.js project: great! You can skip to the next section where we add StealJS to your project.

If you don't already have a project, first make sure you install Node.js and update npm. Next, open your command prompt or terminal application to create a new folder and initialize a `package.json` file:

```bash
mkdir steal-tutorial
cd steal-tutorial
npm init -y
```

You'll also need a local web server to view static files in your browser. http-server is a great option if you don't already have something like Apache installed.
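If you go the http-server route, it can be installed globally and run from the project folder (by default it serves the current directory on port 8080):

```bash
npm install --global http-server
http-server
```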

2. Add StealJS to your project

Next, let's install StealJS. StealJS consists of two main packages: steal (for module loading) and steal-tools (for module bundling). In this article, we're going to focus on steal. We're also going to use Preact to build a simple header component.

```bash
npm install steal preact --save
```

Next, let's create a folder for our header modlet with some files:

```bash
mkdir header && cd header && touch demo.html demo.js header.js test.html test.js && cd ..
```

Our `header` folder has five files:

  • demo.html so we can easily demo the component in a browser
  • demo.js for the demo's JavaScript
  • header.js for the component's main JavaScript
  • test.html so we can easily test the component in a browser
  • test.js for the test's JavaScript

Our component is going to be really simple: it's going to import Preact and use it to create a functional component.

Update your `header.js` file with the following:

```jsx
import { h } from "preact";

export default function Header(props) {
  return (
    <header>
      <h1>{props.title}</h1>
    </header>
  );
}
```

Our component will accept a title property and return a header element. Right now we can't see our component in action, so let's create a demo page.

3. Creating a demo page

The modlet workflow includes creating a demo page for each of your components so it's easier to see your component while you're working on it without having to load your entire application. Having a dedicated demo page also gives you the opportunity to see your component in multiple scenarios without having to view those individually throughout your app.

Let's update our `demo.html` file with the following so we can see our component in a browser:

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Header Demo</title>
</head>
<body>
  <form>
    <label>
      Title
      <input autofocus id="title" type="text" value="Header component" />
    </label>
  </form>
  <div id="container"></div>
  <script src="../node_modules/steal/steal.js" main="header/demo"></script>
</body>
</html>
```

There are three main parts of the body of our demo file:

  • A form with an input so we can dynamically change the title passed to the component
  • A #container for the component to be rendered into
  • A script element for loading StealJS and the demo.js file

We've added a main attribute to the script element so that StealJS knows where to start loading your JavaScript. In this case, the demo file looks for `header/demo.js`, which is going to be responsible for adding the component to the DOM and listening for the value of the input to change.

Let's update `demo.js` with the following:

```jsx
import { h, render } from 'preact';
import Header from './header';

// Get references to the elements in demo.html
const container = document.getElementById('container');
const titleInput = document.getElementById('title');

// Use this to render our demo component
function renderComponent() {
  render(<Header title={titleInput.value} />, container, container.lastChild);
}

// Immediately render the component
renderComponent();

// Listen for the input to change so we re-render the component
titleInput.addEventListener('input', renderComponent);
```

In the demo code above, we get references to the #container and input elements so we can append the component and listen for the input's value to change. Our renderComponent function is responsible for re-rendering the component; we immediately call that function when the script runs so the component shows up on the page, and we also use that function as a listener for the input's value to change.

There's one last thing we need to do before our demo page will work: set up Babel and Preact by loading the transform-react-jsx Babel plugin. You can configure Babel with StealJS by adding this to your `package.json` (from Preact's docs):

... "steal": { "babelOptions": { "plugins": [ ["transform-react-jsx", {"pragma": "h"}] ] } }, ...

Now when we load the `demo.html` page in our browser, we see our component and a form to manipulate it.

Great! With our demo page, we can see how our component behaves with different input values. As we develop our app, we can use this demo page to see and test just this component instead of having to load our entire app to develop a single component.

4. Creating a test page

Now let's set up some testing infrastructure for our component. Our goal is to have an HTML page we can load in our browser to run just our component's tests. This makes it easier to develop the component because you don't have to run the entire test suite or litter your test code with `.only` statements that will inevitably be forgotten and missed during code review.

We're going to use QUnit as our test runner, but you can use StealJS with Jasmine, Karma, etc. First, let's install QUnit as a dev-dependency:

```bash
npm install qunitjs --save-dev
```

Next, let's create our `test.html` file:

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width">
  <title>Header Test</title>
</head>
<body>
  <div id="qunit"></div>
  <div id="qunit-fixture"></div>
  <script src="../node_modules/steal/steal.js" main="header/test"></script>
</body>
</html>
```

In the HTML above, we have a couple of div elements for QUnit and a script element to load Steal and set our `test.js` file as the main entry point. If you compare this to what's on the QUnit home page, you'll notice it's very similar except we're using StealJS to load QUnit's CSS and JavaScript.

Next, let's add this to our `test.js` file:

```jsx
import { h, render } from 'preact';
import Header from './header';
import QUnit from 'qunitjs';
import 'qunitjs/qunit/qunit.css';

// Use the fixture element in the HTML as a container for the component
const fixtureElement = document.getElementById('qunit-fixture');

QUnit.test('hello test', function(assert) {
  const message = 'Welcome to your first StealJS and React app!';

  // Render the component
  const rendered = render(<Header title={message} />, fixtureElement);

  // Make sure the right text is rendered
  assert.equal(rendered.textContent.trim(), message, 'Correct title');
});

// Start the test suite
QUnit.start();
```

You'll notice we're using Steal to import QUnit's CSS. By default, StealJS can only load JavaScript files, but you can use plugins to load other file types! To load QUnit's CSS file, we'll install the steal-css plugin:

```bash
npm install steal-css --save-dev
```

Then update Steal's `package.json` configuration to use the steal-css plugin:

```json
{
  ...
  "steal": {
    "babelOptions": {
      "plugins": [
        ["transform-react-jsx", {"pragma": "h"}]
      ]
    },
    "plugins": [
      "steal-css"
    ]
  },
  ...
}
```

Now we can load the `test.html` file in the browser.

Success! We have just the tests for that component running in our browser, and QUnit provides some additional filtering features for running specific tests. As you work on the component, you can run just that component's tests, getting earlier feedback on whether your changes are working as expected.

Additional resources

We've successfully followed the modlet pattern by creating individual demos and test pages for our component! As we make changes to our app, we can easily test our component in different scenarios using the demo page and run just that component's tests with the test page.

With StealJS, a minimal amount of configuration was required to load our dependencies and create our individual pages, and we didn't have to run a build each time we made a change. If you're intrigued by what else it has to offer, StealJS.com has information on more advanced topics, such as building for production, progressive loading, and using Babel. You can also ask questions on Gitter or the StealJS forums!

Thank you for taking the time to go through this tutorial. Let me know what you think in the comments below!

Article Series:
  1. The Key to Building Large JavaScript Apps: The Modlet Workflow
  2. Improve Your Development Workflow with StealJS (You are here!)

The Modlet Workflow: Improve Your Development Workflow with StealJS is a post from CSS-Tricks
