Tips, Tricks, and Techniques on using Cascading Style Sheets.

Content Security Policy: The Easy Way to Prevent Mixed Content

Tue, 11/14/2017 - 15:09

I recently learned about a browser feature where, if you provide a special HTTP header, it will automatically post to a URL with a report of any non-HTTPS content. This would be a great thing to do when transitioning a site to HTTPS, for example, to root out any mixed content warnings. In this article, we'll implement this feature via a small WordPress plugin.

What is mixed content?

"Mixed content" means you're loading a page over HTTPS, but some of the assets on that page (images, videos, CSS, scripts, scripts called by scripts, etc.) are loaded via plain HTTP.

A browser warning about mixed content.

I'm going to assume that we're all too familiar with this warning and refer the reader to this excellent primer for more background on mixed content.

What is Content Security Policy?

A Content Security Policy (CSP) is a browser feature that gives us a way to instruct the browser on how to handle mixed content errors. By including special HTTP headers in our pages, we can tell the browser to block, upgrade, or report on mixed content. This article focuses on reporting because it gives us a simple and useful entry point into CSPs in general.

CSP is an oddly opaque name. Don't let it spook you, as it's very simple to work with. It seems to have terrific support per caniuse. Here's how the outgoing report is shaped in Chrome:

{
    "csp-report": {
        "document-uri": "http://localhost/wp/2017/03/21/godaddys-micro-dollars/",
        "referrer": "http://localhost/wp/",
        "violated-directive": "style-src",
        "effective-directive": "style-src",
        "original-policy": "default-src https: 'unsafe-inline' 'unsafe-eval'; report-uri http://localhost/wp/wp-json/csst_consecpol/v1/incidents",
        "disposition": "report",
        "blocked-uri": "http://localhost/wp/wp-includes/css/dashicons.min.css?ver=4.8.2",
        "status-code": 200,
        "script-sample": ""
    }
}

Here's what it looks like in its natural habitat:

The outgoing report in the network panel of Chrome's inspector.

What do I do with this?

What you're going to have to do is tell the browser what URL to send that report to, and then have some logic on your server to listen for it. From there, you can have it write to a log file, a database table, an email, whatever. Just be aware that you will likely generate an overwhelming number of reports. Be very much on guard against self-DOSing!
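Before digging into the WordPress plugin, here's a minimal sketch of the listener's core job: pulling the useful fields out of a posted report body. This is written in JavaScript for illustration (the plugin itself is PHP), and the helper name parseCspReport is my own; the field names match the Chrome report shape shown above.

```javascript
// Hypothetical helper: given the raw JSON body the browser POSTs to the
// report-uri, extract the fields most worth logging.
function parseCspReport(rawBody) {
  const report = JSON.parse(rawBody)['csp-report'];
  return {
    documentUri: report['document-uri'],       // the page that loaded the asset
    violatedDirective: report['violated-directive'],
    blockedUri: report['blocked-uri']          // the offending http:// asset
  };
}
```

Whatever language your server speaks, the shape of the work is the same: parse the JSON, pluck the fields you care about, and hand them to your logger.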

Can I just see an example?

You may! I made a small WordPress plugin to show you. The plugin has no UI, just activate it and go. You could peel most of this out and use it in a non-WordPress environment rather directly, and this article does not assume any particular WordPress knowledge beyond activating a plugin and navigating the file system a bit. We'll spend the rest of this article digging into said plugin.

Sending the headers

Our first step will be to include our content security policy as an HTTP header. Check out this file from the plugin. It's quite short, and I think you'll be delighted to see how simple it is.

The relevant bit is this line:

header( "Content-Security-Policy-Report-Only: default-src https: 'unsafe-inline' 'unsafe-eval'; report-uri $rest_url" );

There are a lot of args we can play around with there.

  • With the Content-Security-Policy-Report-Only arg, we're saying that we want a report of the assets that violate our policy, but we don't want to actually block or otherwise affect them.
  • With the default-src arg, we're saying that we're on the lookout for all types of assets, as opposed to just images or fonts or scripts, say.
  • With the https arg, we're saying that our policy is to only approve of assets that get loaded via https.
  • With the unsafe-inline and unsafe-eval args, we're saying we care about both inline resources like a normal image tag, and various methods for concocting code from strings, like JavaScript's eval() function.
  • Finally, most interestingly, with the report-uri $rest_url arg, we're giving the browser a URL to which it should send the report.

If you want more details about the args, there is an excellent doc on Mozilla.org. It's also worth noting that we could instead send our CSP as a meta tag, although I find the syntax awkward and Google notes that it is not their preferred method.

This article will only utilize the HTTP header technique, and you'll notice that in my header, I'm doing some work to build the report URL. It happens to be a WP API URL. We'll dig into that next.

Registering an endpoint

You are likely familiar with the WP API. In the old days before we had the WP API, when I needed some arbitrary URL to listen for a form submission, I would often make a page, or a post of a custom post type. This was annoying and fragile because it was too easy to delete the page in wp-admin without realizing what it was for. With the WP API, we have a much more stable way to register a listener, and I do so in this class. There are three points of interest in this class.

In the first function, after checking to make sure my log is not getting too big, I make a call to register_rest_route(), which is a WordPress core function for registering a listener:

function rest_api_init() {

    $check_log_file_size = $this -> check_log_file_size();
    if( ! $check_log_file_size ) { return FALSE; }

    ...

    register_rest_route( CSST_CONSECPOL . '/' . $rest_version, '/' . $rest_ep . '/', array(
        'methods'  => 'POST',
        'callback' => array( $this, 'cb' ),
    ) );

}

That function also allows me to register a callback function, which handles the posted CSP report:

function cb( \WP_REST_Request $request ) {

    $body = $request -> get_body();
    $body = json_decode( $body, TRUE );
    $csp_report = $body['csp-report'];

    ...

    $record = new Record( $args );
    $out = $record -> get_log_entry();

}

In that function, I massage the report from its raw format into a PHP array that my logging class will handle.

Creating a log file

In this class, I create a directory in the wp-content folder where my log file will live. I'm not a big fan of checking for stuff like this on every single page load, so notice that this function first checks to see if this is the first page load since a plugin update, before bothering to make the directory.

function make_directory() {

    $out = NULL;

    $update = new Update;
    if( ! $update -> get_is_update() ) { return FALSE; }

    $log_dir_path = $this -> meta -> get_log_dir_path();

    $file_exists = file_exists( $log_dir_path );
    if( $file_exists ) { return FALSE; }

    $out = mkdir( $log_dir_path, 0775, TRUE );
    return $out;

}

That update logic is in a different class and is wildly useful for lots of things, but not of special interest for this article.

Logging mixed content

Now that we have CSP reports getting posted, and we have a directory to log them to, let's look at how to actually convert a report into a log entry.

In this class I have a function for adding new records to our log file. It's interesting that much of the heavy lifting is simply a matter of providing the 'a' (append) mode argument to the fopen() function:

function add_row( $array ) {

    // Open for writing only; place the file pointer at the end of the file.
    // If the file does not exist, attempt to create it.
    $mode = 'a';

    // Open the file.
    $path = $this -> meta -> get_log_file_path();
    $handle = fopen( $path, $mode );

    // Add the row to the spreadsheet.
    fputcsv( $handle, $array );

    // Close the file.
    fclose( $handle );

    return TRUE;

}

Nothing particular to WordPress here, just a dude adding a row to a csv in a normal-ish PHP manner. Again, if you don't care for the idea of having a log file, you could have it send an email or write to the database, or whatever seems best.


At this point we've covered all of the interesting highlights from my plugin, and I'd like to offer a couple of pitfalls to watch out for.

First, be aware that CSP reports, like any browser feature, are subject to cross-browser differences. Look at this shockingly, painstakingly detailed report on such differences.

Second, be aware that if you have a server configuration that prevents mixed content from being requested, then the browser will never get a chance to report on it. In such a scenario, CSP reports are more useful as a way to prepare for a migration to https, rather than a way to monitor https compliance. An example of this configuration is Cloudflare's "Always Use HTTPS".

Finally, the self-DOS issue bears repeating. It's completely reasonable to assume that a popular site will rack up millions of reports per month. Therefore, rather than track the reports on your own server or database, consider outsourcing this to a service such as httpschecker.net.
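If outsourcing isn't an option, one hedged mitigation is to only emit the report-uri directive on a sample of page loads. The sketch below is my own illustration in JavaScript (the plugin itself builds this header in PHP), and SAMPLE_RATE is a hypothetical knob, not something from the plugin:

```javascript
// Sketch: attach report-uri to only ~1% of responses so a popular site
// doesn't flood its own report endpoint.
const SAMPLE_RATE = 0.01;

function buildCspHeader(reportUrl) {
  // The policy itself is always sent...
  let policy = "default-src https: 'unsafe-inline' 'unsafe-eval'";
  // ...but only a sampled fraction of visitors generate reports.
  if (Math.random() < SAMPLE_RATE) {
    policy += `; report-uri ${reportUrl}`;
  }
  return policy;
}
```

Sampling trades completeness for volume: you still find your mixed-content offenders on a busy site, just without millions of duplicate reports.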

Next steps

Some next steps specific to WordPress would be to add a UI for downloading the report file. You could also store the reports in the database instead of in a file. This would make it economical to, say, determine if a new record already exists before adding it as a duplicate.

More generally, I would encourage the curious reader to experiment with the many possible args for the CSP header. It's impressive that so much power is packed into such a terse syntax. It's possible to handle requests by asset type, domain, protocol — really almost any combination imaginable.

Content Security Policy: The Easy Way to Prevent Mixed Content is a post from CSS-Tricks

Robust React User Interfaces with Finite State Machines

Mon, 11/13/2017 - 16:08

User interfaces can be expressed by two things:

  1. The state of the UI
  2. Actions that can change that state

From credit card payment devices and gas pump screens to the software that your company creates, user interfaces react to the actions of the user and other sources and change their state accordingly. This concept isn't just limited to technology; it's a fundamental part of how everything works:

For every action, there is an equal and opposite reaction.

- Isaac Newton

This is a concept we can apply to developing better user interfaces, but before we go there, I want you to try something. Consider a photo gallery interface with this user interaction flow:

  1. Show a search input and a search button that allows the user to search for photos
  2. When the search button is clicked, fetch photos with the search term from Flickr
  3. Display the search results in a grid of small sized photos
  4. When a photo is clicked/tapped, show the full size photo
  5. When a full-sized photo is clicked/tapped again, go back to the gallery view

Now think about how you would develop it. Maybe even try programming it in React. I'll wait; I'm just an article. I'm not going anywhere.

Finished? Awesome! That wasn't too difficult, right? Now think about the following scenarios that you might have forgotten:

  • What if the user clicks the search button repeatedly?
  • What if the user wants to cancel the search while it's in-flight?
  • Is the search button disabled while searching?
  • What if the user mischievously enables the disabled button?
  • Is there any indication that the results are loading?
  • What happens if there's an error? Can the user retry the search?
  • What if the user searches and then clicks a photo? What should happen?

These are just some of the potential problems that can arise during planning, development, or testing. Few things are worse in software development than thinking that you've covered every possible use case, and then discovering (or receiving) new edge cases that will further complicate your code once you account for them. It's especially difficult to jump into a pre-existing project where all of these use cases are undocumented, but instead hidden in spaghetti code and left for you to decipher.

Stating the obvious

What if we could determine all possible UI states that can result from all possible actions performed on each state? And what if we can visualize these states, actions, and transitions between states? Designers intuitively do this, in what are called "user flows" (or "UX Flows"), to depict what the next state of the UI should be depending on the user interaction.

Picture credit: Simplified Checkout Process by Michael Pons

In computer science terms, there is a computational model called finite automata, or "finite state machines" (FSM), that can express the same type of information. That is, they describe which state comes next when an action is performed on the current state. Just like user flows, these finite state machines can be visualized in a clear and unambiguous way. For example, here is the state transition diagram describing the FSM of a traffic light:

What is a finite state machine?

A state machine is a useful way of modeling behavior in an application: for every action, there is a reaction in the form of a state change. There are five parts to a classical finite state machine:

  1. A set of states (e.g., idle, loading, success, error, etc.)
  2. A set of actions (e.g., SEARCH, CANCEL, SELECT_PHOTO, etc.)
  3. An initial state (e.g., idle)
  4. A transition function (e.g., transition('idle', 'SEARCH') == 'loading')
  5. Final states (which don't apply to this article.)

Deterministic finite state machines (which is what we'll be dealing with) have some constraints, as well:

  • There are a finite number of possible states
  • There are a finite number of possible actions (these are the "finite" parts)
  • The application can only be in one of these states at a time
  • Given a currentState and an action, the transition function must always return the same nextState (this is the "deterministic" part)
Representing finite state machines

A finite state machine can be represented as a mapping from a state to its "transitions", where each transition is an action and the nextState that follows that action. This mapping is just a plain JavaScript object.

Let's consider an American traffic light example, one of the simplest FSM examples. Assume we start on green, then transition to yellow after some TIMER, then red after another TIMER, and then back to green after another TIMER:

const machine = {
  green: {
    TIMER: 'yellow'
  },
  yellow: {
    TIMER: 'red'
  },
  red: {
    TIMER: 'green'
  }
};

const initialState = 'green';

A transition function answers the question:

Given the current state and an action, what will the next state be?

With our setup, transitioning to the next state based on an action (in this case, TIMER) is just a look-up of the currentState and action in the machine object, since:

  • machine[currentState] gives us the next action mapping, e.g., machine['green'] == {TIMER: 'yellow'}
  • machine[currentState][action] gives us the next state from the action, e.g., machine['green']['TIMER'] == 'yellow':

// ...
function transition(currentState, action) {
  return machine[currentState][action];
}

transition('green', 'TIMER'); // => 'yellow'

Instead of using if/else or switch statements to determine the next state, e.g., if (currentState === 'green') return 'yellow';, we moved all of that logic into a plain JavaScript object that can be serialized into JSON. That's a strategy that will pay off greatly in terms of testing, visualization, reuse, analysis, flexibility, and configurability.
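The testing payoff is immediate: because the machine is plain data, a test can exhaustively walk every state without touching the UI. This loop is my own sketch, reusing the traffic light machine and transition function from above:

```javascript
// The traffic light machine, as defined earlier in the article.
const machine = {
  green:  { TIMER: 'yellow' },
  yellow: { TIMER: 'red' },
  red:    { TIMER: 'green' }
};

function transition(currentState, action) {
  return machine[currentState][action];
}

// Exhaustive check: every state must transition to another defined state
// on TIMER. With if/else logic scattered through components, this kind of
// one-liner audit isn't possible.
const allValid = Object.keys(machine).every(
  state => transition(state, 'TIMER') in machine
);
```

The same data-walking trick powers the visualization and static-analysis benefits mentioned above.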

See the Pen Simple finite state machine example by David Khourshid (@davidkpiano) on CodePen.

Finite State Machines in React

Taking a look at a more complicated example, let's see how we can represent our gallery app using a finite state machine. The app can be in one of several states:

  • start - the initial search page view
  • loading - search results fetching view
  • error - search failed view
  • gallery - successful search results view
  • photo - detailed single photo view

And several actions can be performed, either by the user or the app itself:

  • SEARCH - user clicks the "search" button
  • SEARCH_SUCCESS - search succeeded with the queried photos
  • SEARCH_FAILURE - search failed due to an error
  • CANCEL_SEARCH - user clicks the "cancel search" button
  • SELECT_PHOTO - user clicks a photo in the gallery
  • EXIT_PHOTO - user clicks to exit the detailed photo view

The best way to visualize how these states and actions come together, at first, is with two very powerful tools: pencil and paper. Draw arrows between the states, and label the arrows with actions that cause transitions between the states:

We can now represent these transitions in an object, just like in the traffic light example:

const galleryMachine = {
  start: {
    SEARCH: 'loading'
  },
  loading: {
    SEARCH_SUCCESS: 'gallery',
    SEARCH_FAILURE: 'error',
    CANCEL_SEARCH: 'gallery'
  },
  error: {
    SEARCH: 'loading'
  },
  gallery: {
    SEARCH: 'loading',
    SELECT_PHOTO: 'photo'
  },
  photo: {
    EXIT_PHOTO: 'gallery'
  }
};

const initialState = 'start';

Now let's see how we can incorporate this finite state machine configuration and the transition function into our gallery app. In the App's component state, there will be a single property that will indicate the current finite state, gallery:

class App extends React.Component {
  constructor(props) {
    super(props);

    this.state = {
      gallery: 'start', // initial finite state
      query: '',
      items: []
    };
  }
  // ...

The transition function will be a method of this App class, so that we can retrieve the current finite state:

// ...
transition(action) {
  const currentGalleryState = this.state.gallery;
  const nextGalleryState = galleryMachine[currentGalleryState][action.type];

  if (nextGalleryState) {
    const nextState = this.command(nextGalleryState, action);

    this.setState({
      gallery: nextGalleryState,
      ...nextState // extended state
    });
  }
}
// ...

This looks similar to the previously described transition(currentState, action) function, with a few differences:

  • The action is an object with a type property that specifies the string action type, e.g., type: 'SEARCH'
  • Only the action is passed in since we can retrieve the current finite state from this.state.gallery
  • The entire app state will be updated with the next finite state, i.e., nextGalleryState, as well as any extended state (nextState) that results from executing a command based on the next state and action payload (see the "Executing commands" section)
Executing commands

When a state change occurs, "side effects" (or "commands" as we'll refer to them) might be executed. For example, when a user clicks the "Search" button and a 'SEARCH' action is emitted, the state will transition to 'loading', and an async Flickr search should be executed (otherwise, 'loading' would be a lie, and developers should never lie).

We can handle these side effects in a command(nextState, action) method that determines what to execute given the next finite state and action payload, as well as what the extended state should be:

// ...
command(nextState, action) {
  switch (nextState) {
    case 'loading':
      // execute the search command
      this.search(action.query);
      break;
    case 'gallery':
      if (action.items) {
        // update the state with the found items
        return { items: action.items };
      }
      break;
    case 'photo':
      if (action.item) {
        // update the state with the selected photo item
        return { photo: action.item };
      }
      break;
    default:
      break;
  }
}
// ...

Actions can have payloads other than the action's type, which the app state might need to be updated with. For example, when a 'SEARCH' action succeeds, a 'SEARCH_SUCCESS' action can be emitted with the items from the search result:

// ...
fetchJsonp(
  `https://api.flickr.com/services/feeds/photos_public.gne?lang=en-us&format=json&tags=${encodedQuery}`,
  { jsonpCallback: 'jsoncallback' })
  .then(res => res.json())
  .then(data => {
    this.transition({ type: 'SEARCH_SUCCESS', items: data.items });
  })
  .catch(error => {
    this.transition({ type: 'SEARCH_FAILURE' });
  });
// ...

The command() method above will immediately return any extended state (i.e., state other than the finite state) that this.state should be updated with in this.setState(...), along with the finite state change.

The final machine-controlled app

Since we've declaratively configured the finite state machine for the app, we can render the proper UI in a cleaner way by conditionally rendering based on the current finite state:

// ...
render() {
  const galleryState = this.state.gallery;

  return (
    <div className="ui-app" data-state={galleryState}>
      {this.renderForm(galleryState)}
      {this.renderGallery(galleryState)}
      {this.renderPhoto(galleryState)}
    </div>
  );
}
// ...

The final result:

See the Pen Gallery app with Finite State Machines by David Khourshid (@davidkpiano) on CodePen.

Finite state in CSS

You might have noticed data-state={galleryState} in the code above. By setting that data-attribute, we can conditionally style any part of our app using an attribute selector:

.ui-app {
  // ...

  &[data-state="start"] {
    justify-content: center;
  }

  &[data-state="loading"] {
    .ui-item {
      opacity: .5;
    }
  }
}

This is preferable to using className because you can enforce the constraint that only a single value at a time can be set for data-state, and the specificity is the same as using a class. Attribute selectors are also supported in most popular CSS-in-JS solutions.

Advantages and resources

Using finite state machines for describing the behavior of complex applications is nothing new. Traditionally, this was done with switch and goto statements, but by describing finite state machines as a declarative mapping between states, actions, and next states, you can use that data to visualize the state transitions:

Furthermore, using declarative finite state machines allows you to:

  • Store, share, and configure application logic anywhere - similar components, other apps, in databases, in other languages, etc.
  • Make collaboration easier with designers and project managers
  • Statically analyze and optimize state transitions, including states that are impossible to reach
  • Easily change application logic without fear
  • Automate integration tests
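As a concrete example of the static-analysis bullet: because the machine is a plain object, a few lines can walk it from the initial state and flag any state that can never be reached. The reachableStates helper below is my own sketch (not from the article), run against the gallery machine defined earlier:

```javascript
// The gallery machine, as configured earlier in the article.
const galleryMachine = {
  start:   { SEARCH: 'loading' },
  loading: { SEARCH_SUCCESS: 'gallery', SEARCH_FAILURE: 'error', CANCEL_SEARCH: 'gallery' },
  error:   { SEARCH: 'loading' },
  gallery: { SEARCH: 'loading', SELECT_PHOTO: 'photo' },
  photo:   { EXIT_PHOTO: 'gallery' }
};

// Breadth-first walk: collect every state reachable from initialState
// through some sequence of actions.
function reachableStates(machine, initialState) {
  const seen = new Set([initialState]);
  const queue = [initialState];
  while (queue.length) {
    const state = queue.shift();
    for (const next of Object.values(machine[state] || {})) {
      if (!seen.has(next)) {
        seen.add(next);
        queue.push(next);
      }
    }
  }
  return seen;
}

// Any state defined in the machine but never reached is likely dead code.
const reached = reachableStates(galleryMachine, 'start');
const unreachable = Object.keys(galleryMachine).filter(state => !reached.has(state));
```

For the gallery machine every state is reachable, but run against a larger app configuration, a check like this can catch states left behind by refactors.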
Conclusion and takeaways

Finite state machines are an abstraction for modeling the parts of your app that can be represented as finite states, and almost all apps have those parts. The FSM coding patterns presented in this article:

  • Can be used with any existing state management setup; e.g., Redux or MobX
  • Can be adapted to any framework (not just React), or no framework at all
  • Are not written in stone; the developer can adapt the patterns to their coding style
  • Are not applicable to every single situation or use-case

From now on, when you encounter "boolean flag" variables such as isLoaded or isSuccess, I encourage you to stop and think about how your app state can be modeled as a finite state machine instead. That way, you can refactor your app to represent state as state === 'loaded' or state === 'success', using enumerated states in place of boolean flags.
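A hypothetical before/after sketch of that refactor (the names VALID_STATES and toState are illustrative, not from any library): two boolean flags allow four combinations, some nonsensical, while one enumerated state makes the invalid mixes unrepresentable.

```javascript
// Before: two flags permit the contradictory { isLoading: true, isSuccess: true }.
const flagState = { isLoading: false, isSuccess: false };

// After: a single enumerated state can only hold one value at a time.
const VALID_STATES = ['idle', 'loading', 'success', 'error'];
let state = 'idle';

function toState(next) {
  // Reject anything outside the enumeration instead of silently
  // accumulating inconsistent flags.
  if (!VALID_STATES.includes(next)) {
    throw new Error(`Unknown state: ${next}`);
  }
  state = next;
  return state;
}
```

In a full FSM, toState would also consult the machine object to confirm the transition is legal; this sketch only shows the flags-to-enumeration half of the idea.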


I gave a talk at React Rally 2017 about using finite automata and statecharts to create better user interfaces, if you want to learn more about the motivation and principles:

Slides: Infinitely Better UIs with Finite Automata



Discover The Fatwigoo

Sun, 11/12/2017 - 16:08

When you use a bit of inline <svg> and you don't set height and width, but you do set a viewBox, that's a fatwigoo. I love the name.

The problem with fatwigoos is that the <svg> will size itself like a block-level element, rendering enormously until the CSS comes in and (likely) has sizing rules to size it into place.

It's one of those things where if you develop with pretty fast internet, you might not ever see it. But if you're somewhere where the internet is slow or has high latency (or if you're Karl Dubost and literally block CSS), you'll probably see it all the time.

I was an offender before I learned how obnoxious this is. At first, it felt weird to size things in HTML rather than CSS. My solution now is generally to leave sensible defaults on inline SVG (probably icons) like height="20" width="20" and still do my actual sizing in CSS.



Grid areas and the elements that occupy them aren't necessarily the same size.

Fri, 11/10/2017 - 19:00

That's a good little thing to know about CSS grid.

I'm sure that is obvious to many of you, but I'm writing this because it was very much not obvious to me for far too long.

Let's take a close look.

There are two players to get into your mind here:

  1. The grid area, as created by the parent element with display: grid;
  2. The element itself, like a <div>, that goes into that grid area.

For example, say we set up a mega simple grid like this:

.grid {
  display: grid;
  grid-template-columns: 1fr 1fr;
  grid-gap: 1rem;
}

If we put four grid items in there, here's what it looks like when inspecting it in Firefox DevTools:

Now let's target one of those grid items and give it a background-color:

The grid item and the element are the same size!

There is a very specific reason for that though. It's because the default value for justify-items and align-items is stretch. The value of stretch literally stretches the item to fill the grid area.

But there are several reasons why the element might not fill a grid area:

  1. On the grid parent, justify-items or align-items is some non-stretch value.
  2. On the grid element, align-self or justify-self is some non-stretch value.
  3. On the grid element, if height or width is constrained.

Check it:

Who cares?

I dunno, it just feels useful to know that when placing an element in a grid area, that's just the starting point for layout. It'll fill the area by default, but it doesn't have to. It could be smaller or bigger. It could be aligned into any of the corners or centered.

Perhaps the most interesting limitation is that you can't target the grid area itself. If you want to take advantage of alignment, for example, you're giving up the promise of filling the entire grid area. So you can't apply a background and know it will cover that whole grid area anymore. If you need to take advantage of alignment and apply a covering background, you'll need to leave it to stretch, make the new element display: grid; as well, and use that for alignment.


Adapting JavaScript Abstractions Over Time

Fri, 11/10/2017 - 18:17

Even if you haven't read my post The Importance Of JavaScript Abstractions When Working With Remote Data, chances are you're already convinced that maintainability and scalability are important for your project and the way toward that is introducing abstractions.

For the purposes of this post, let's assume that an abstraction, in JavaScript, is a module.

The initial implementation of a module is only the beginning of its long (and hopefully lasting) life. I see three major events in the lifecycle of a module:

  1. Introduction of the module. The initial implementation and the process of re-using it around the project.
  2. Changing the module. Adapting the module over time.
  3. Removing the module.

In my previous post the emphasis was on that first one. In this article, let's think more about the second.

Handling changes to a module is a pain point I see frequently. Compared to introducing the module, the way developers maintain or change it is equally or even more important for keeping the project maintainable and scalable. I've seen a well-written and abstracted module completely ruined over time by changes. I've sometimes been the one who has made those disastrous changes!

When I say disastrous, I mean disastrous from a maintainability and scalability perspective. I understand that from the perspective of approaching deadlines and releasing features which must work, slowing down to think about the full potential impact of your change isn't always an option.

The reasons why a developer's changes might not be as optimal are countless. I'd like to stress one in particular:

The Skill of Making Changes in a Maintainable Manner

Here's a way you can start making changes like a pro.

Let's start with a code example: an API module. I choose this because communicating with an external API is one of the first fundamental abstractions I define when I start a project. The idea is to store all the API related configuration and settings (like the base URL, error handling logic, etc) in this module.

Let's introduce only one setting, API.url, one private method, API._handleError(), and one public method, API.get():

class API {
  constructor() {
    this.url = 'http://whatever.api/v1/';
  }

  /**
   * Fetch API's specific way to check
   * whether an HTTP response's status code is in the successful range.
   */
  _handleError(_res) {
    return _res.ok ? _res : Promise.reject(_res.statusText);
  }

  /**
   * Get data abstraction
   * @return {Promise}
   */
  get(_endpoint) {
    return window.fetch(this.url + _endpoint, { method: 'GET' })
      .then(this._handleError)
      .then(res => res.json())
      .catch(error => {
        alert('So sad. There was an error.');
        throw new Error(error);
      });
  }
};

In this module, our only public method, API.get(), returns a Promise. In all places where we need to get remote data, instead of directly calling the Fetch API via window.fetch(), we use our API module abstraction. For example, to get the user's info we call API.get('user'), or for the current weather forecast, API.get('weather'). The important thing about this implementation is that the Fetch API is not tightly coupled with our code.

Now, let's say that a change request comes! Our tech lead asks us to switch to a different method of getting remote data. We need to switch to Axios. How can we approach this challenge?

Before we start discussing approaches, let's first summarize what stays the same and what changes:

  1. Change: In our public API.get() method:
    • We need to replace the window.fetch() call with axios(). And we need to return a Promise again, to keep our implementation consistent. Axios is Promise based. Excellent!
    • Our server's response is JSON. With the Fetch API, we chain a .then( res => res.json()) statement to parse the response data. With Axios, the response provided by the server is under the data property, and we don't need to parse it. Therefore, we need to change the .then statement to .then( res => res.data ).
  2. Change: In our private API._handleError method:
    • The ok boolean flag is missing from the response object. However, there is a statusText property. We can hook into it. If its value is 'OK', then it's all good.

      Side note: yes, having ok equal to true in Fetch API is not the same as having 'OK' in Axios's statusText. But let's keep it simple and, for the sake of not being too broad, leave it as it is and not introduce any advanced error handling.

  3. No change: The API.url stays the same, along with the funky way we catch errors and alert them.

All clear! Now let's drill down to the actual approaches to apply these changes.

Approach 1: Delete code. Write code.

class API {
  constructor() {
    this.url = 'http://whatever.api/v1/'; // stays the same
  }

  _handleError(_res) {
    // DELETE: return _res.ok ? _res : Promise.reject(_res.statusText);
    return _res.statusText === 'OK' ? _res : Promise.reject(_res.statusText);
  }

  get(_endpoint) {
    // DELETE: return window.fetch(this.url + _endpoint, { method: 'GET' })
    return axios.get(this.url + _endpoint)
      .then(this._handleError)
      // DELETE: .then( res => res.json())
      .then(res => res.data)
      .catch(error => {
        alert('So sad. There was an error.');
        throw new Error(error);
      });
  }
};

Sounds reasonable enough. Commit. Push. Merge. Done.

However, there are certain cases why this might not be a good idea. Imagine the following happens: after switching to Axios, you find out that there is a feature which doesn't work with XMLHttpRequest (the interface Axios uses to fetch resources), but was previously working just fine with Fetch's fancy new browser API. What do we do now?

Our tech lead says, let's use the old API implementation for this specific use-case, and keep using Axios everywhere else. What do you do? Find the old API module in your source control history. Revert. Add if statements here and there. Doesn't sound very good to me.

There must be an easier, more maintainable and scalable way to make changes! Well, there is.

Approach 2: Refactor code. Write Adapters!

There's an incoming change request! Let's start all over again, and instead of deleting the code, let's move the Fetch-specific logic into another abstraction, which will serve as an adapter (or wrapper) for all the Fetch specifics.

For those of you familiar with the Adapter Pattern (also referred to as the Wrapper Pattern), yes, that's exactly where we're headed! See an excellent nerdy introduction here, if you're interested in all the details.

Here's the plan:

Step 1

Take all Fetch-specific lines from the API module and refactor them into a new abstraction, FetchAdapter.

class FetchAdapter {
  _handleError(_res) {
    return _res.ok ? _res : Promise.reject(_res.statusText);
  }

  get(_endpoint) {
    return window.fetch(_endpoint, { method: 'GET' })
      .then(this._handleError)
      .then(res => res.json());
  }
};

Step 2

Refactor the API module by removing the parts which are Fetch specific and keep everything else the same. Add FetchAdapter as a dependency (in some manner):

class API {
  constructor(_adapter = new FetchAdapter()) {
    this.adapter = _adapter;
    this.url = 'http://whatever.api/v1/';
  }

  get(_endpoint) {
    return this.adapter.get(_endpoint)
      .catch(error => {
        alert('So sad. There was an error.');
        throw new Error(error);
      });
  }
};

That's a different story now! The architecture has changed so that you can swap in different mechanisms (adapters) for getting resources. Final step: you guessed it! Write an AxiosAdapter!

const AxiosAdapter = {
  _handleError(_res) {
    return _res.statusText === 'OK' ? _res : Promise.reject(_res.statusText);
  },

  get(_endpoint) {
    return axios.get(_endpoint)
      .then(this._handleError)
      .then(res => res.data);
  }
};

And in the API module, switch the default adapter to the Axios one:

class API {
  constructor(_adapter = new /*FetchAdapter()*/ AxiosAdapter()) {
    this.adapter = _adapter;
    /* ... */
  }

  /* ... */
};

Awesome! What do we do if we need to use the old API implementation for this specific use-case, and keep using Axios everywhere else? No problem!

// Import your modules however you like, just an example.
import API from './API';
import FetchAdapter from './FetchAdapter';

// Uses the AxiosAdapter (the default one)
const api = new API();
api.get('user');

// Uses the FetchAdapter
const legacyAPI = new API(new FetchAdapter());
legacyAPI.get('user');

So next time you need to make changes to your project, evaluate which approach makes more sense:

  • Delete code. Write code.
  • Refactor code. Write adapters.

Judge carefully based on your specific use-case. Over-adapter-ifying your codebase and introducing too many abstractions could lead to increased complexity, which isn't good either.

Happy adapter-ifying!

Adapting JavaScript Abstractions Over Time is a post from CSS-Tricks

Text Input with Expanding Bottom Border

Thu, 11/09/2017 - 23:56

Petr Gazarov published a pretty rad little design pattern in his article Text input highlight, TripAdvisor style.

It's a trick! You can't really make an <input> stretch like that, so Petr makes a <span> that syncs with the input's value and acts as the border itself. The whole thing is a React component.

If you're willing to use a <span contenteditable> instead, you could do the whole thing in CSS!

See the Pen Outline bottom by Chris Coyier (@chriscoyier) on CodePen.

Although that also means no placeholder.

Direct Link to ArticlePermalink

Text Input with Expanding Bottom Border is a post from CSS-Tricks

CSS Code Smells

Thu, 11/09/2017 - 14:25

Every week(ish) we publish the newsletter which contains the best links, tips, and tricks about web design and development. At the end, we typically write about something we've learned in the week. That might not be directly related to CSS or front-end development at all, but they're a lot of fun to share. Here's an example of one of those segments from the newsletter where I ramble on about code quality and dive into what I think should be considered a code smell when it comes to the CSS language.

A lot of developers complain about CSS. The cascade! The weird property names! Vertical alignment! There are many strange things about the language, especially if you're more familiar with a programming language like JavaScript or Ruby.

However, I think the real problem with the CSS language is that it's simple but not easy. What I mean by that is that it doesn't take much time to learn how to write CSS but it takes extraordinary effort to write "good" CSS. Within a week or two, you can probably memorize all the properties and values and make really beautiful designs in the browser without any plugins or dependencies and wow all your friends. But that's not what I mean by "good CSS."

In an effort to define what that is, I've been thinking a lot lately about how we can identify what bad CSS is first. In other areas of programming, developers tend to talk of code smells when they describe bad code: hints in a program that indicate that, hey, maybe this thing you've written isn't a good idea. It could be something simple like a naming convention or a particularly fragile bit of code.

In a similar vein, below is my own list of code smells that I think will help us identify bad design and CSS. Note that these points are related to my experience in building large scale design systems in complex apps, so please take this all with a grain of salt.

Code smell #1: The fact you're writing CSS in the first place

A large team will likely already have a collection of tools and systems in place to create things like buttons or styles to move elements around in a layout so the simple fact that you're about to write CSS is probably a bad idea. If you're just about to write custom CSS for a specific edge case then stop! You probably need to do one of the following:

  1. Learn how the current system works and why it has the constraints it does and stick to those constraints
  2. Rethink the underlying infrastructure of the CSS

I think this approach was perfectly described here:

About the false velocity of “quick fixes”. pic.twitter.com/91jauLyEJ3

— Pete Lacey (@chopeh) November 2, 2017

Code smell #2: File Names and Naming Conventions

Let's say you need to make a support page for your app. First thing you probably do is make a CSS file called `support.scss` and start writing code like this:

.support {
  background-color: #efefef;
  max-width: 600px;
  border: 2px solid #bbb;
}

So the problem here isn't necessarily the styles themselves but the concept of a 'support page' in the first place. When we write CSS we need to think in much larger abstractions — we need to think in templates or components instead of the specific content the user needs to see on a page. That way we can reuse something like a "card" over and over again on every page, including that one instance we need for the support page:

.card {
  background-color: #efefef;
  max-width: 600px;
  border: 2px solid #bbb;
}

This is already a little better! (My next question would be what is a card, what content can a card have inside it, when is it not okay to use a card, etc etc. – these questions will likely challenge the design and keep you focused.)

Code smell #3: Styling HTML elements

In my experience, styling an HTML element (like a section or a paragraph tag) almost always means that we're writing a hack. There's only one appropriate time to style an HTML element directly like this:

section {
  display: block;
}

figure {
  margin-bottom: 20px;
}

And that is in the application's global, so-called "reset styles". Otherwise, we're making our codebase fractured and harder to debug because we have no idea whether those styles are hacks for a specific purpose or whether they define the defaults for that HTML element.

Code smell #4: Indenting code

Indenting Sass code so that child components sit within a parent element is almost always a code smell and a sure sign that this design needs to be refactored. Here's one example:

.card {
  display: flex;

  .header {
    font-size: 21px;
  }
}

In this example are we saying that you can only use a .header class inside a .card? Or are we overriding another block of CSS somewhere else deep within our codebase? The fact that we even have to ask questions like this shows the biggest problem here: we have now sown doubt into the codebase. To really understand how this code works I have to have knowledge of other bits of code. And if I have to ask questions about why this code exists or how it works then it is probably either too complicated or unmaintainable for the future.

This leads to the fifth code smell...

Code smell #5: Overriding CSS

In an ideal world we have a reset CSS file that styles all our default elements and then we have separate individual CSS files for every button, form input and component in our application. Our code should be, at most, overridden by the cascade once. First, this makes our overall code more predictable and second, makes our component code (like button.scss) super readable. We now know that if we need to fix something we can open up a single file and those changes are replicated throughout the application in one fell swoop. When it comes to CSS, predictability is everything.

In that same CSS Utopia, we would then perhaps make it impossible to override certain class names with something like CSS Modules. That way we can't make mistakes by accident.

Code smell #6: CSS files with more than 50 lines of code in them

The more CSS you write the more complicated and fragile the codebase becomes. So whenever I get to around ~50 lines of CSS I tend to rethink what I'm designing by asking myself a couple of questions. Starting and ending with: "is this a single component, or can we break it up into separate parts that work independently from one another?"

That's a difficult and time-consuming process to be practicing endlessly but it leads to a solid codebase and it trains you to write really good CSS.

Wrapping up

I suppose I now have another question, but this time for you: what do you see as a code smell in CSS? What is bad CSS? What is really good CSS? Make sure to add a comment below!

CSS Code Smells is a post from CSS-Tricks

BugReplay


Thu, 11/09/2017 - 14:25

(This is a sponsored post.)

Let's say you're trying to report a bug and you want to do the best possible job you can. Maybe it's even your job! Say, you're logging the bug for your own development team to fix. You want to provide as much detail and context as you can, without being overwhelming.

You know what works pretty well? Short videos.

Even better, video plus details about the context, like the current browser, platform, and version.

But if you really wanna knock it out of the park, how about those things plus replayable network traffic and JavaScript logs? That's exactly what BugReplay does.

BugReplay has a browser extension you install, and you just click the button to record a session with everything I mentioned: video, context, and logs. When you're done, you have a URL with all that information easily watchable. Here's a demo.

A developer looking at a recording like this will be able to see what's going on in the video, check the HTTP requests, see JavaScript exceptions and console logs, and even more contextual data like whether cookies were enabled or not.

Here's a video on how it all works:

Take these recorded sessions and add them to your GitHub or Jira tickets, share them in Slack, or however your team communicates.

Even if BugReplay just did video, it would be impressive in how quickly and easily it works. Add to that all the contextual information, team dashboard, and real-time logging, and it's quite impressive!

Direct Link to ArticlePermalink

​BugReplay is a post from CSS-Tricks

The All-Powerful Sketch

Wed, 11/08/2017 - 21:26

Sketch is such a massive player in the screen design tooling world. Over on the Media Temple blog I take a stab at some of the reasons I think that might be.

Direct Link to ArticlePermalink

The All-Powerful Sketch is a post from CSS-Tricks

ARIA is Spackle, Not Rebar

Wed, 11/08/2017 - 15:05

Much like their physical counterparts, the materials we use to build websites have purpose. To use them without understanding their strengths and limitations is irresponsible. Nobody wants to live in a poorly-built house. So why are poorly-built websites acceptable?

In this post, I'm going to address WAI-ARIA, and how misusing it can do more harm than good.

Materials as technology

In construction, spackle is used to fix minor defects on interiors. It is a thick paste that dries into a solid surface that can be sanded smooth and painted over. Most renters become acquainted with it when attempting to get their damage deposit back.

Rebar is a lattice of steel rods used to reinforce concrete. Every modern building uses it—chances are good you'll see some if you walk past any decent-sized construction site.

Technology as materials

HTML is the rebar-reinforced concrete of the web. To stretch the metaphor, CSS is the interior and exterior decoration, and JavaScript is the wiring and plumbing.

Every tag in HTML has what is known as native semantics. The act of writing an HTML element programmatically communicates to the browser what that tag represents. Writing a button tag explicitly tells the browser, "This is a button. It does buttony things."

The reason this is so important is that assistive technology hooks into native semantics and uses it to create an interface for navigation. A page not described semantically is a lot like a building without rooms or windows: People navigating via a screen reader have to wander around aimlessly in the dark and hope they stumble onto what they need.

ARIA stands for Accessible Rich Internet Applications and is a relatively new specification developed to help assistive technology better communicate with dynamic, JavaScript-controlled content. It is intended to supplement existing semantic attributes by providing enhanced interactivity and context to screen readers and other assistive technology.

Using spackle to build walls

A concerning trend I've seen recently is the blind, mass-application of ARIA. It feels like an attempt by developers to conduct accessibility compliance via buckshot—throw enough of something at a target trusting that you'll eventually hit it.

Unfortunately, there is a very real danger to this approach. Misapplied ARIA has the potential to do more harm than good.

The semantics inherent in ARIA means that when applied improperly it can create a discordant, contradictory mess when read via screen reader. Instead of hearing, "This is a button. It does buttony things.", people begin to hear things along the lines of, "This is nothing, but also a button. But it's also a deactivated checkbox that is disabled and it needs to shout that constantly."

If you can use a native HTML element or attribute with the semantics and behavior you require already built in, instead of re-purposing an element and adding an ARIA role, state or property to make it accessible, then do so.
First rule of ARIA use

In addition, ARIA is a new technology. This means that browser support and behavior are varied. While I am optimistic that in the future the major browsers will have complete and unified support, the current landscape has gaps and bugs.

Another important consideration is who actually uses the technology. Compliance isn't some purely academic vanity metric we're striving for. We're building robust systems for real people that allow them to get what they want or need with as little complication as possible. Many people who use assistive technology are reluctant to upgrade for fear of breaking functionality. Ever get irritated when your favorite program redesigns and you have to re-learn how to use it? Yeah.

The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect.
– Tim Berners-Lee

It feels disingenuous to see massive JavaScript frameworks tout the benefits of the DRY principle while also slathering redundant and misapplied attributes in their markup. The web is accessible by default. For better or for worse, we are free to do what we want to it after that.

The fix

This isn't to say we should completely avoid using ARIA. When applied with skill and precision, it can turn a confusing or frustrating user experience into an intuitive and effortless one, with far fewer brittle hacks and workarounds.

A little goes a long way. Before considering other options, start with markup that semantically describes the content it is wrapping. Test extensively, and only apply ARIA if deficiencies between HTML's native semantics and JavaScript's interactions arise.

Development teams will appreciate the advantage of terse code that's easier to maintain. Savvy developers will use a CSS-Trick™ and leverage CSS attribute selectors to create systems where visual presentation is tied to semantic meaning.

input:invalid,
[aria-invalid] {
  border: 4px dotted #f64100;
}

Examples

Here are a few of the more common patterns I've seen recently, and why they are problematic. This doesn't mean these are the only kinds of errors that exist, but it's a good primer on recognizing what not to do:

<li role="listitem">Hold the Bluetooth button on the speaker for three seconds to make the speaker discoverable</li>

The role is redundant. The native semantics of the li element already describe it as a list item.

<p role="command">Type CTRL+P to print</p>

command is an Abstract Role. They are only used in ARIA to help describe its taxonomy. Just because an ARIA attribute seems like it is applicable doesn't mean it necessarily is. Additionally, the kbd tag could be used on "CTRL" and "P" to more accurately describe the keyboard command.

<div role="button" class="button">Link to device specifications</div>

Failing to use a button tag runs the risk of not accommodating all the different ways a user can interact with a button and how the browser responds. In addition, the a tag should be used for links.

<body aria-live="assertive" aria-atomic="true">

Usually the intent behind something like this is to expose updates to the screen reader user. Unfortunately, when scoped to the body tag, any page change—including all JS-related updates—are announced immediately. A setting of assertive on aria-live also means that each update interrupts whatever it is the user is currently doing. This is a disastrous experience, especially for single page apps.

<div aria-checked="true"></div>

You can style a native checkbox element to look like whatever you want it to. Better support! Less work!

<div role="link" tabindex="40" title="Link text"><a>Link text</a></div>

Yes, it's actual production code. Where to begin? First, never use a tabindex value greater than 0. Secondly, the title attribute probably does not do what you think it does. Third, the anchor tag should have a destination—links take you places, after all. Fourth, the role of link assigned to a div wrapping an a element is entirely superfluous.

<h2 class="h3" role="heading" aria-level="1">How to make a perfect soufflé every time</h2>

Credit where credit's due: Nicolas Steenhout outlines the issues for this one.

Do better

Much like content, markup shouldn't be an afterthought when building a website. I believe most people are genuinely trying to do their best most of the time, but wielding a technology without knowing its implications is dangerous and irresponsible.

I'm usually more of a honey-instead-of-vinegar kind of person when I try to get people to practice accessibility, but not here. This isn't a soft sell about the benefits of developing and designing with an accessible, inclusive mindset. It's a post about doing your job.

Every decision a team makes affects a site's accessibility.
Laura Kalbag

Get better at authoring

Learn about the available HTML tags, what they describe, and how to best use them. Same goes for ARIA. Give your page template semantics the same care and attention you give your JavaScript during code reviews.

Get better at testing

There's little excuse to not incorporate a screen reader into your testing and QA process. NVDA is free. macOS, Windows, iOS and Android all come with screen readers built in. Some nice people have even written guides to help you learn how to use them.

Automated accessibility testing is a huge boon, but it also isn't a silver bullet. It won't report on what it doesn't know to report, meaning it's up to a human to manually determine if navigating through the website makes sense. This isn't any different than other usability testing endeavors.

Build better buildings

Universal Design teaches us that websites, like buildings, can be both beautiful and accessible. If you're looking for a place to start, here are some resources:

ARIA is Spackle, Not Rebar is a post from CSS-Tricks

“a more visually-pleasing focus”

Wed, 11/08/2017 - 15:03

There is a JavaScript library, Focusingly, that says:

With Focusingly, focus styling adapts to match and fit individual elements.

No configuration required, Focusingly figures it out. The result is a pleasingly tailored effect that subtly draws the eye.

The idea is that if a link (or whatever other focusable element) is red, the outline will be red too, instead of that fuzzy light blue which might be undesirable aesthetically.

Why JavaScript? I'm not sure exactly. Matt Smith made a demo that shows that the outline color inherits from the color, which yields the same result.

a:focus {
  outline: 1px solid;
  outline-offset: .15em;
}

Direct Link to ArticlePermalink

“a more visually-pleasing focus” is a post from CSS-Tricks

Building Flexible Design Systems

Tue, 11/07/2017 - 21:46

Yesenia Perez-Cruz talks about design systems that aren't just, as she puts it, Lego bricks for piecing layouts together. Yesenia is Design Director at Vox, which is a parent to many very visually different brands, so you can see how a single inflexible design system might fall over.

Successful design patterns don't exist in a vacuum.

Direct Link to ArticlePermalink

Building Flexible Design Systems is a post from CSS-Tricks

“almost everything on computers is perceptually slower than it was in 1983”

Tue, 11/07/2017 - 16:59

Good rant. Thankfully it's a tweetstorm, not some readable blog post. 😉

I think about this kind of thing with cable box TV UX. At my parents' house, changing the channel takes like 4-5 seconds for the new channel to come in with all the overlays and garbage. You used to be able to turn a dial and the new channel was instantly there.

You'd like to think performance is a steady march forward. Computers are so fast these days! But it might just be a steady march backward.

Direct Link to ArticlePermalink

“almost everything on computers is perceptually slower than it was in 1983” is a post from CSS-Tricks

The Contrast Swap Technique: Improved Image Performance with CSS Filters

Tue, 11/07/2017 - 14:53

With CSS filter effects and blend modes, we can now leverage various techniques for styling images directly in the browser. However, creating aesthetic theming isn't all that filter effects are good for. You can use filters to indicate hover state, hide passwords, and now—for web performance.

While playing with profiling performance wins of using blend modes for duotone image effects (I'll write up an article on this soon), I discovered something even more exciting. A major image optimization win! The idea is to reduce image contrast in the source image, reducing its file size, then boosting the contrast back up with CSS filters!

Start with your image, then remove the contrast, and then reapply it with CSS filters.

How It Works

Let's put a point on exactly how this works:

  1. Reduce image contrast using a linear transform function (Photoshop can do this)
  2. Apply a contrast filter in CSS to the image to make up for the contrast removal

Step one involves opening your image in a program that lets you reduce contrast linearly. Photoshop's legacy mode does a good job at this (Image > Adjustments > Brightness/Contrast):

You get to this screen via Image > Adjustments > Brightness/Contrast in Photoshop CC.

Not all programs use the same functions to apply image transforms (for example, this would not work with the macOS default image editor, since it uses a different technique to reduce contrast). A lot of the work done to build image effects into the browser was initially done by Adobe, so it makes sense that Photoshop's Legacy Mode aligns with browser image effects.

Then, we apply some CSS filters to our image. The filters we'll be using are contrast and (a little bit of) brightness. With the 50% Legacy Photoshop reduction, I applied filter: contrast(1.75) brightness(1.2); to each image.

Major Savings

This technique is very effective for reducing image size and therefore the overall weight of your page. In the following study, I used 4 vibrant photos taken on an iPhone, applied a 50% reduction in contrast using Photoshop Legacy Mode, saved each photo at Maximum quality (10), and then applied filter: contrast(1.75) brightness(1.2); to each image. These are the results:

You can play with the live demo here to check it out for yourself!

In each of the above cases, we saved between 23% and 28% in image size by reducing and reapplying the contrast using CSS filters. This is with saving each of the images at maximum quality.

If you look closely, you can see some legitimate losses in image quality. This is especially true with majority-dark images, so this technique is not perfect, but it definitely proves image savings in an interesting way.

Browser Support Considerations

Be aware that browser support for CSS filters is "pretty good".

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 18*, Opera 15*, Firefox 35, IE No, Edge 16, Safari 6*
Mobile / Tablet: iOS Safari 6.0-6.1*, Opera Mobile 37*, Opera Mini No, Android 4.4*, Android Chrome 61, Android Firefox 56

As you can see, Internet Explorer and Opera Mini lack support. Edge 16 (the current latest version) supports CSS filters and this technique works like a charm. You'll have to decide if a reduced-contrast image as a fallback is acceptable or not.

What About Repainting?

You may be thinking: "but while we're saving in image size, we're putting more work on the browser, wouldn't this affect performance?" That's a great question! CSS filters do trigger a repaint because they set off window.getComputedStyle(). Let's profile our example.

What I did was open an incognito window in Chrome, disable JavaScript (just to be certain for the extensions I have), set the network to "Slow 3G" and set the CPU to a 6x slowdown:

With a 6x CPU slowdown, the longest Paint Raster took 0.27 ms, AKA 0.00027 seconds.

While the images took a while to load in, the actual repaint was pretty quick. With a 6x CPU slowdown, the longest individual Rasterize Paint took 0.27 ms, AKA 0.00027 seconds.

CSS filters originated from SVG filters, and are relatively browser-optimized versions of the most popular SVG filter effect transformations. So I think it's pretty safe to use this as progressive enhancement at this point (being aware of IE users and Opera Mini users!).

Conclusion and the Future

There are still major savings to be had when reducing image quality (again, in this small study, the images were saved at high quality for a more balanced result). Running images through optimizers like ImageOptim, and sending smaller image file sizes based on screen size (like responsive images in HTML or CSS), will give you even bigger savings.

In the web performance optimization world, I find image performance the most effective thing we can do to reduce web cruft and data for our users, since images are the largest chunk of what we send on the web (by far). If we can start leveraging modern CSS to help lift some of the weight of our images, we can look into a whole new world of optimization solutions.

For example, this could potentially be taken even further, playing with other CSS filters such as saturate and brightness. We could leverage automation tools like Gulp and Webpack to apply the image effects for us, just as we use automation tools to run our images through optimizers. Blending this technique with other best practices for image optimization can lead to major savings in the pixel-based assets we're sending our users.

The Contrast Swap Technique: Improved Image Performance with CSS Filters is a post from CSS-Tricks

Designing Tables to be Read, Not Looked At

Tue, 11/07/2017 - 14:52

Richard Rutter, in support of his new book Web Typography, shares loads of great advice on data table design. Here's a good one:

You might consider making all the columns an even width. This too does nothing for the readability of the contents. Some table cells will be too wide, leaving the data lost and detached from its neighbours. Other table cells will be too narrow, cramping the data uncomfortably. Table columns should be sized according to the data they contain.

I was excited to be reminded of the possibility for aligning numbers with decimals:

td { text-align: "." center; }

But the support for that is non-existent as best I can tell. Another tip, using font-variant-numeric: lining-nums tabular-nums; does have some support.

Tables can be beautiful but they are not works of art. Instead of painting and decorating them, design tables for your reader.

Direct Link to ArticlePermalink

Designing Tables to be Read, Not Looked At is a post from CSS-Tricks

Flexbox and Grids, your layout’s best friends

Mon, 11/06/2017 - 19:53

Eva Ferreira smacks down a few myths about CSS grid before going on to demonstrate some of the concepts of each:

❌ Grids arrived to kill Flexbox.
❌ Flexbox is Grid’s fallback.

Some more good advice about prototyping:

The best way to begin thinking about a grid structure is to draw it on paper; you’ll be able to see which are the columns, rows, and gaps you’ll be working on. Doodling on paper doesn’t take long and it will give you a better understanding of the overall grid.

These days, if you can draw a layout, you can probably get it done for real.

Direct Link to ArticlePermalink

Flexbox and Grids, your layout’s best friends is a post from CSS-Tricks

input type=’country’

Mon, 11/06/2017 - 18:56

Terence Eden looks into the complexity behind adding a new type of HTML input that would allow users to select a country from a list, as per a suggestion from Lea Verou. Lea suggested it could be as simple as this:

<input type='country'>

And then, voilà! An input with a list of all countries would appear in the browser. But Terence describes just how difficult making the user experience around that one tiny input could be for browser makers:

Let's start with the big one. What is a country? This is about as contentious as it gets! It involves national identities, international politics, and hereditary relationships. Scotland, for example, is a country. That is a (fairly) uncontentious statement - and yet in drop-down lists, I rarely see it mentioned. Why? Because it is one of the four countries which make up the country of the United Kingdom - and so it is usually (but not always) subsumed into that. Some countries don't recognize each other. Some believe that the other country is really part of their country. Some countries don't exist.

Direct Link to ArticlePermalink

input type=’country’ is a post from CSS-Tricks

Creating a Star to Heart Animation with SVG and Vanilla JavaScript

Mon, 11/06/2017 - 15:50

In my previous article, I showed how to smoothly transition from one state to another using vanilla JavaScript. Make sure you check that one out first because I'll be referencing things I explained there in a lot of detail, like the demos given as examples, the formulas for various timing functions, and how not to reverse the timing function when going back from the final state of a transition to the initial one.

The last example showcased making the shape of a mouth go from sad to glad by changing the d attribute of the path we used to draw the mouth.

Manipulating the path data can be taken to the next level to give us more interesting results, like a star morphing into a heart.

The star to heart animation we'll be coding.

The idea

Both are made out of five cubic Bézier curves. The interactive demo below shows the individual curves and the points where these curves are connected. Clicking any curve or point highlights it, as well as its corresponding curve/point from the other shape.

See the Pen by thebabydino (@thebabydino) on CodePen.

Note that all of these curves are created as cubic ones, even if, for some of them, the two control points coincide.

The shapes for both the star and the heart are pretty simplistic and unrealistic ones, but they'll do.

The starting code

As seen in the face animation example, I often choose to generate such shapes with Pug. Here, though, since the path data we generate will also need to be manipulated with JavaScript for the transition, going all JavaScript, computing the coordinates and putting them into the d attribute included, seems like the best option.

This means we don't need to write much in terms of markup:

<svg>
  <path id='shape'/>
</svg>

In terms of JavaScript, we start by getting the SVG element and the path element - this is the shape that morphs from a star into a heart and back. We also set a viewBox attribute on the SVG element such that its dimensions along the two axes are equal and the (0,0) point is dead in the middle. This means the coordinates of the top left corner are (-.5*D,-.5*D), where D is the value for the viewBox dimensions. And last, but not least, we create an object to store info about the initial and final states of the transition and about how to go from the interpolated values to the actual attribute values we need to set on our SVG shape.

const _SVG = document.querySelector('svg'),
      _SHAPE = document.getElementById('shape'),
      D = 1000,
      O = { ini: {}, fin: {}, afn: {} };

(function init() {
  _SVG.setAttribute('viewBox', [-.5*D, -.5*D, D, D].join(' '));
})();

Now that we got this out of the way, we can move on to the more interesting part!

The geometry of the shapes

The initial coordinates of the end points and control points are those for which we get the star and the final ones are the ones for which we get the heart. The range for each coordinate is the difference between its final value and its initial one. Here, we also rotate the shape as we morph it because we want the star to point up and we change the fill to go from the golden star to the crimson heart.

Alright, but how do we get the coordinates of the end and control points in the two cases?


In the case of the star, we start with a regular pentagram. The end points of our curves are at the intersection between the pentagram edges and we use the pentagram vertices as control points.

Regular pentagram with vertex and edge crossing points highlighted as control points and end points of five cubic Bézier curves (live).

Getting the vertices of our regular pentagram is pretty straightforward given the radius (or diameter) of its circumcircle, which we take to be a fraction of the viewBox size of our SVG (considered here square for simplicity, we're not going for tight packing in this case). But how do we get their intersections?

First of all, let's consider the small pentagon highlighted inside the pentagram in the illustration below. Since the pentagram is regular, the small pentagon whose vertices coincide with the edge intersections of the pentagram is also regular. It also has the same incircle as the pentagram and, therefore, the same inradius.

Regular pentagram and inner regular pentagon share the same incircle (live).

So if we compute the pentagram inradius, we also have the inradius of the inner pentagon. Together with the central angle corresponding to an edge of a regular pentagon, this allows us to get the circumradius of this pentagon, which in turn lets us compute its vertex coordinates. And these vertices are exactly the edge intersections of the pentagram and the end points of our cubic Bézier curves.

Our regular pentagram is represented by the Schläfli symbol {5/2}, meaning that it has 5 vertices, and, given these 5 vertex points equally distributed on its circumcircle, 360°/5 = 72° apart, we start from the first, skip the next point on the circle and connect to the second one (this is the meaning of the 2 in the symbol; 1 would describe a pentagon as we don't skip any points, we connect to the first). And so on - we keep skipping the point right after.

In the interactive demo below, select either pentagon or pentagram to see how they get constructed.

See the Pen by thebabydino (@thebabydino) on CodePen.

This way, we get that the central angle corresponding to an edge of the regular pentagram is twice that corresponding to the regular pentagon with the same vertices. We have 1·(360°/5) = 1·72° = 72° (or 1·(2·π/5) in radians) for the pentagon versus 2·(360°/5) = 2·72° = 144° (2·(2·π/5) in radians) for the pentagram. In general, given a regular polygon (whether it's a convex or a star polygon doesn't matter) with the Schläfli symbol {p/q}, the central angle corresponding to one of its edges is q·(360°/p) (q·(2·π/p) in radians).
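As a quick numeric check of this formula, here's a tiny sketch (the centralAngle helper is mine, not part of the article's code):

```javascript
/* central angle (in degrees) for one edge of a regular polygon
   with Schläfli symbol {p/q}: q*(360°/p) */
function centralAngle(p, q = 1) { return q*360/p }

console.log(centralAngle(5));    // pentagon {5/1}: 72
console.log(centralAngle(5, 2)); // pentagram {5/2}: 144
```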

Central angle corresponding to an edge of a regular polygon: pentagram (left, 144°) vs. pentagon (right, 72°) (live).

We also know the pentagram circumradius, which we said we take as a fraction of the square viewBox size. This means we can get the pentagram inradius (which is equal to that of the small pentagon) from a right triangle where we know the hypotenuse (it's the pentagram circumradius) and an acute angle (half the central angle corresponding to the pentagram edge).

Computing the inradius of a regular pentagram from a right triangle where the hypotenuse is the pentagram circumradius and the acute angle between the two is half the central angle corresponding to a pentagram edge (live).

The cosine of half the central angle is the inradius over the circumradius, which gives us that the inradius is the circumradius multiplied by this cosine value.

Now that we have the inradius of the small regular pentagon inside our pentagram, we can compute its circumradius from a similar right triangle having the circumradius as hypotenuse, half the central angle as one of the acute angles and the inradius as the cathetus adjacent to this acute angle.

The illustration below highlights a right triangle formed from a circumradius of a regular pentagon, its inradius and half an edge. From this triangle, we can compute the circumradius if we know the inradius and the central angle corresponding to a pentagon edge as the acute angle between these two radii is half this central angle.

Computing the circumradius of a regular pentagon from a right triangle where it's the hypotenuse, while the catheti are the inradius and half the pentagon edge and the acute angle between the two radii is half the central angle corresponding to a pentagon edge (live).

Remember that, in this case, the central angle is not the same as for the pentagram, it's half of it (360°/5 = 72°).

Good, now that we have this radius, we can get all the coordinates we want. They're the coordinates of points distributed at equal angles on two circles. We have 5 points on the outer circle (the circumcircle of our pentagram) and 5 on the inner one (the circumcircle of the small pentagon). That's 10 points in total, with angles of 360°/10 = 36° in between the radial lines they're on.

The end and control points are distributed on the circumradius of the inner pentagon and on that of the pentagram respectively (live).

We know the radii of both these circles. The radius of the outer one is the regular pentagram circumradius, which we take to be some arbitrary fraction of the viewBox dimension (.5 or .25 or .32 or whatever value we feel would work best). The radius of the inner one is the circumradius of the small regular pentagon formed inside the pentagram. We can compute it as a function of the central angle corresponding to one of its edges and its inradius, which is equal to that of the pentagram and can therefore be computed from the pentagram circumradius and the central angle corresponding to a pentagram edge.
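Condensed into code, this whole chain of radius computations looks like the standalone sketch below (the names mirror the ones the article uses, and D is the viewBox size):

```javascript
const D = 1000, f = .5, P = 5,
      RCO = f*D,                  /* pentagram circumradius */
      BAS = 2*(2*Math.PI/P),      /* central angle for a pentagram edge (144°) */
      BAC = 2*Math.PI/P,          /* central angle for a pentagon edge (72°) */
      RI  = RCO*Math.cos(.5*BAS), /* inradius shared with the inner pentagon */
      RCI = RI/Math.cos(.5*BAC);  /* inner pentagon circumradius */

console.log(Math.round(RI));  // 155
console.log(Math.round(RCI)); // 191
```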

So, at this point, we can generate the path data that draws our star; it doesn't depend on anything that's still unknown.

So let's do that and put all of the above into code!

We start by creating a getStarPoints(f) function which depends on an arbitrary factor (f) that's going to help us get the pentagram circumradius from the viewBox size. This function returns an array of coordinates we later use for interpolation.

Within this function, we first compute the constant stuff that won't change as we progress through it - the pentagram circumradius (radius of the outer circle), the central (base) angles corresponding to one edge of a regular pentagram and polygon, the inradius shared by the pentagram and the inner pentagon whose vertices are the points where the pentagram edges cross each other, the circumradius of this inner pentagon and, finally, the total number of distinct points whose coordinates we need to compute and the base angle for this distribution.

After that, within a loop, we compute the coordinates of the points we want and we push them into the array of coordinates.

const P = 5; /* number of cubic curves/polygon vertices */

function getStarPoints(f = .5) {
  const RCO = f*D, /* outer (pentagram) circumradius */
        BAS = 2*(2*Math.PI/P), /* base angle for star polygon */
        BAC = 2*Math.PI/P, /* base angle for convex polygon */
        RI = RCO*Math.cos(.5*BAS), /* pentagram/inner pentagon inradius */
        RCI = RI/Math.cos(.5*BAC), /* inner pentagon circumradius */
        ND = 2*P, /* total number of distinct points we need to get */
        BAD = 2*Math.PI/ND, /* base angle for point distribution */
        PTS = []; /* array we fill with point coordinates */

  for(let i = 0; i < ND; i++) {}

  return PTS;
}

To compute the coordinates of our points, we use the radius of the circle they're on and the angle of the radial line connecting them to the origin with respect to the horizontal axis, as illustrated by the interactive demo below (drag the point to see how its Cartesian coordinates change):

See the Pen by thebabydino (@thebabydino) on CodePen.

In our case, the current radius is the radius of the outer circle (pentagram circumradius RCO) for even index points (0, 2, ...) and the radius of the inner circle (inner pentagon circumradius RCI) for odd index points (1, 3, ...), while the angle of the radial line connecting the current point to the origin is the point index (i) multiplied with the base angle for point distribution (BAD, which happens to be 36° or π/10 in our particular case).
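That conversion from a radius and an angle to Cartesian coordinates can be isolated into a helper (hypothetical, for illustration only; the actual loop inlines it):

```javascript
/* point on a circle of radius r, at angle a (in radians)
   measured from the positive x axis */
function polarToCartesian(r, a) {
  return [r*Math.cos(a), r*Math.sin(a)];
}

console.log(polarToCartesian(100, 0).map(Math.round));         // [100, 0]
console.log(polarToCartesian(100, Math.PI/2).map(Math.round)); // [0, 100]
```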

So within the loop we have:

for(let i = 0; i < ND; i++) {
  let cr = i%2 ? RCI : RCO,
      ca = i*BAD,
      x = Math.round(cr*Math.cos(ca)),
      y = Math.round(cr*Math.sin(ca));
}

Since we've chosen a pretty big value for the viewBox size, we can safely round the coordinate values so that our code looks cleaner, without decimals.

As for pushing these coordinates into the points array, we do this twice when we're on the outer circle (the even indices case) because that's where we actually have two control points overlapping, but only for the star, so we'll need to move each of these overlapping points into different positions to get the heart.

for(let i = 0; i < ND; i++) {
  /* same as before */
  PTS.push([x, y]);
  if(!(i%2)) PTS.push([x, y]);
}

Next, we put data into our object O. For the path data (d) attribute, we store the array of points we get when calling the above function as the initial value. We also create a function for generating the actual attribute value (the path data string in this case - inserting commands in between the pairs of coordinates, so that the browser knows what to do with those coordinates). Finally, we take every attribute we have stored data for and we set its value to the value returned by the previously mentioned function:

(function init() {
  /* same as before */
  O.d = {
    ini: getStarPoints(),
    afn: function(pts) {
      return pts.reduce((a, c, i) => {
        return a + (i%3 ? ' ' : 'C') + c
      }, `M${pts[pts.length - 1]}`)
    }
  };

  for(let p in O)
    _SHAPE.setAttribute(p, O[p].afn(O[p].ini))
})();

The result can be seen in the Pen below:

See the Pen by thebabydino (@thebabydino) on CodePen.

This is a promising start. However, we want the first tip of the generating pentagram to point down and the first tip of the resulting star to point up. Currently, they're both pointing right. This is because we start from 0° (3 o'clock). So in order to start from 6 o'clock, we add 90° (π/2 in radians) to every current angle in the getStarPoints() function.

ca = i*BAD + .5*Math.PI

This makes the first tip of the generating pentagram and of the resulting star point down. To rotate the star, we need to set its transform attribute to a half circle rotation. In order to do so, we first set an initial rotation angle to -180. Afterwards, we set the function that generates the actual attribute value to a function that generates a string from a function name and an argument:

function fnStr(fname, farg) { return `${fname}(${farg})` };

(function init() {
  /* same as before */
  O.transform = {
    ini: -180,
    afn: (ang) => fnStr('rotate', ang)
  };
  /* same as before */
})();

We also give our star a golden fill in a similar fashion. We set an RGB array to the initial value in the fill case and we use a similar function to generate the actual attribute value:

(function init() {
  /* same as before */
  O.fill = {
    ini: [255, 215, 0],
    afn: (rgb) => fnStr('rgb', rgb)
  };
  /* same as before */
})();

We now have a nice golden SVG star, made up of five cubic Bézier curves:

See the Pen by thebabydino (@thebabydino) on CodePen.


Since we have the star, let's next see how we can get the heart!

We start with two intersecting circles of equal radii, both a fraction (let's say .25 for the time being) of the viewBox size. These circles intersect in such a way that the segment connecting their central points is on the x axis and the segment connecting their intersection points is on the y axis. We also take these two segments to be equal.

We start with two circles of equal radius whose central points are on the horizontal axis and which intersect on the vertical axis (live).

Next, we draw diameters through the upper intersection point and then tangents through the opposite points of these diameters. These tangents intersect on the y axis.

Constructing diameters through the upper intersection point and tangents to the circle at the opposite ends of these diameters, tangents which intersect on the vertical axis (live).

The upper intersection point and the diametrically opposite points make up three of the five end points we need. The other two end points split the outer half circle arcs into two equal parts, thus giving us four quarter circle arcs.

Highlighting the end points of the cubic Bézier curves that make up the heart and the coinciding control points of the bottom one of these curves (live).

Both control points for the curve at the bottom coincide with the intersection of the two tangents drawn previously. But what about the other four curves? How can we go from circular arcs to cubic Bézier curves?

We don't have a cubic Bézier curve equivalent for a quarter circle arc, but we can find a very good approximation, as explained in this article.

The gist of it is that we start from a quarter circle arc of radius R and draw tangents to the end points of this arc (N and Q). These tangents intersect at P. The quadrilateral ONPQ has all angles equal to 90° (or π/2), three of them by construction (the angle at O corresponds to a 90° arc, and the tangent at a point on a circle is always perpendicular to the radial line to that point) and the final one by computation (the sum of angles in a quadrilateral is always 360° and the other three angles add up to 270°). This makes ONPQ a rectangle. But ONPQ also has two consecutive edges equal (OQ and ON are both radial lines, equal to R in length), which makes it a square of edge R. So the lengths of NP and QP are also equal to R.

Approximating a quarter circle arc with a cubic Bézier curve (live).

The control points of the cubic curve approximating our arc are on the tangent lines NP and QP, at C·R away from the end points, where C is the constant the previously linked article computes to be .551915.
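We can get a feel for how good this approximation is with a quick sketch (a hypothetical check, not part of the article's code): build the cubic curve for a quarter arc of radius R going from N(R,0) to Q(0,R) and compare the curve's midpoint with the actual arc midpoint at 45°.

```javascript
const R = 100, C = .551915;

/* end points N(R,0) and Q(0,R); the tangents at them meet at P(R,R);
   the control points sit at C*R from the end points, towards P */
const P0 = [R, 0], P1 = [R, C*R], P2 = [C*R, R], P3 = [0, R];

/* cubic Bézier value at t = .5 is (P0 + 3*P1 + 3*P2 + P3)/8 */
const mid = [0, 1].map(i => (P0[i] + 3*P1[i] + 3*P2[i] + P3[i])/8),
      arcMid = R/Math.SQRT2; /* the arc midpoint is at (R/√2, R/√2) */

console.log(mid, [arcMid, arcMid]); /* both coordinates land within ~0.02 of the arc */
```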

Given all of this, we can now start computing the coordinates of the end points and control points of the cubic curves making up our heart.

Due to the way we've chosen to construct this heart, TO0SO1 (see figure below) is a square since it has all edges equal (all are radii of one of our two equal circles) and its diagonals are equal by construction (we said the distance between the central points equals that between the intersection points). Here, O is the intersection of the diagonals and OT is half the ST diagonal. T and S are on the y axis, so their x coordinate is 0. Their y coordinate in absolute value equals the OT segment, which is half the diagonal (as is the OS segment).

The TO0SO1 square (live).

We can split any square of edge length l into two equal right isosceles triangles where the catheti coincide with the square edges and the hypotenuse coincides with a diagonal.

Any square can be split into two congruent right isosceles triangles (live).

Using one of these right triangles, we can compute the hypotenuse (and therefore the square diagonal) using Pythagoras' theorem: d² = l² + l² = 2∙l². This gives us the square diagonal as a function of the edge: d = √(2∙l²) = l∙√2 (conversely, the edge as a function of the diagonal is l = d/√2). It also means that half the diagonal is d/2 = (l∙√2)/2 = l/√2.
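These two relations are worth having as tiny helpers when checking the coordinate computations that follow (hypothetical helpers, not in the article's code):

```javascript
/* for a square of edge l: d² = l² + l² = 2·l², so d = l·√2 */
const diagonal = l => l*Math.SQRT2,
      halfDiagonal = l => l/Math.SQRT2; /* d/2 = (l·√2)/2 = l/√2 */

console.log(diagonal(1));     // ≈ 1.41421
console.log(halfDiagonal(1)); // ≈ 0.70711
```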

Applying this to our TO0SO1 square of edge length R, we get that the y coordinate of T (which, in absolute value, equals half this square's diagonal) is -R/√2 and the y coordinate of S is R/√2.

The coordinates of the vertices of the TO0SO1 square (live).

Similarly, the Ok points are on the x axis, so their y coordinates are 0, while their x coordinates are given by the half diagonal OOk: ±R/√2.

TO0SO1 being a square also means all of its angles are 90° (π/2 in radians) angles.

The TAkBkS quadrilaterals (live).

In the illustration above, the TBk segments are diameter segments, meaning that the TBk arcs are half circle, or 180° arcs and we've split them into two equal halves with the Ak points, getting two equal 90° arcs - TAk and AkBk, which correspond to two equal 90° angles, ∠TOkAk and ∠AkOkBk.

Given that ∠TOkS are 90° angles and ∠TOkAk are also 90° angles by construction, it follows that the SAk segments are also diameter segments. This gives us that, in the TAkBkS quadrilaterals, the diagonals TBk and SAk are perpendicular, equal and cross each other in the middle (TOk, OkBk, SOk and OkAk are all equal to the initial circle radius R). This means the TAkBkS quadrilaterals are squares whose diagonals are 2∙R.

From here we can get that the edge length of the TAkBkS quadrilaterals is 2∙R/√2 = R∙√2. Since all angles of a square are 90° ones and the TS edge coincides with the vertical axis, this means the TAk and SBk edges are horizontal, parallel to the x axis and their length gives us the x coordinates of the Ak and Bk points: ±R∙√2.

Since TAk and SBk are horizontal segments, the y coordinates of the Ak and Bk points equal those of the T (-R/√2) and S (R/√2) points respectively.

The coordinates of the vertices of the TAkBkS squares (live).

Another thing we get from here is that, since TAkBkS are squares, the AkBk segments are parallel to TS, which is on the y (vertical) axis, so the AkBk segments are vertical. Additionally, since the x axis is parallel to the TAk and SBk segments and cuts the TS segment in half, it follows that it also cuts the AkBk segments in half.

Now let's move on to the control points.

We start with the overlapping control points for the bottom curve.

The TB0CB1 quadrilateral (live).

The TB0CB1 quadrilateral has all angles equal to 90° (∠T since TO0SO1 is a square, ∠Bk by construction since the BkC segments are tangent to the circle at Bk and therefore perpendicular onto the radial lines OkBk at that point; and finally, ∠C can only be 90° since the sum of angles in a quadrilateral is 360° and the other three angles add up to 270°), which makes it a rectangle. It also has two consecutive edges equal - TB0 and TB1 are both diameters of the initial circles and therefore both equal to 2∙R. All of this makes it a square of edge 2∙R.

From here, we can get its diagonal TC - it's 2∙R∙√2. Since C is on the y axis, its x coordinate is 0. Its y coordinate is the length of the OC segment. The OC segment is the TC segment minus the OT segment: 2∙R∙√2 - R/√2 = 4∙R/√2 - R/√2 = 3∙R/√2.
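A one-liner numeric check confirms this arithmetic (R is an arbitrary sample value here, not from the article):

```javascript
const R = 100,
      TC = 2*R*Math.SQRT2, /* diagonal of the TB0CB1 square */
      OT = R/Math.SQRT2,   /* half diagonal of the TO0SO1 square */
      OC = TC - OT;        /* should come out to 3*R/√2 */

console.log(OC, 3*R/Math.SQRT2); /* both ≈ 212.132 */
```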

The coordinates of the vertices of the TB0CB1 square (live).

So we now have that the coordinates of the two coinciding control points for the bottom curve are (0, 3∙R/√2).

In order to get the coordinates of the control points for the other curves, we draw tangents through their endpoints and we get the intersections of these tangents at Dk and Ek.

The TOkAkDk and AkOkBkEk quadrilaterals (live).

In the TOkAkDk quadrilaterals, we have that all angles are 90° (right) angles, three of them by construction (∠DkTOk and ∠DkAkOk are the angles between the radial and tangent lines at T and Ak respectively, while ∠TOkAk are the angles corresponding to the quarter circle arcs TAk) and the fourth by computation (the sum of angles in a quadrilateral is 360° and the other three add up to 270°). This makes TOkAkDk rectangles. Since they have two consecutive edges equal (OkT and OkAk are radial segments of length R), they are also squares.

This means the diagonals TAk and OkDk are R∙√2. We already know that TAk are horizontal and, since the diagonals of a square are perpendicular, it follows that the OkDk segments are vertical. This means the Ok and Dk points have the same x coordinate, which we've already computed for Ok to be ±R/√2. Since we know the length of OkDk, we can also get the y coordinates of the Dk points - they're minus the diagonal length: -R∙√2.

Similarly, in the AkOkBkEk quadrilaterals, we have that all angles are 90° (right) angles, three of them by construction (∠EkAkOk and ∠EkBkOk are the angles between the radial and tangent lines at Ak and Bk respectively, while ∠AkOkBk are the angles corresponding to the quarter circle arcs AkBk) and the fourth by computation (the sum of angles in a quadrilateral is 360° and the other three add up to 270°). This makes AkOkBkEk rectangles. Since they have two consecutive edges equal (OkAk and OkBk are radial segments of length R), they are also squares.

From here, we get the diagonals AkBk and OkEk are R∙√2. We know the AkBk segments are vertical and split into half by the horizontal axis, which means the OkEk segments are on this axis and the y coordinates of the Ek points are 0. Since the x coordinates of the Ok points are ±R/√2 and the OkEk segments are R∙√2, we can compute those of the Ek points as well - they're ±3∙R/√2.

The coordinates of the newly computed vertices of the TOₖAₖDₖ and AₖOₖBₖEₖ squares (live).

Alright, but these intersection points for the tangents are not the control points we need to get the circular arc approximations. The control points we want are on the TDk, AkDk, AkEk and BkEk segments at about 55% (this value is given by the constant C computed in the previously mentioned article) away from the curve end points (T, Ak, Bk). This means the segments from the endpoints to the control points are C∙R.

In this situation, the coordinates of our control points are (1 - C) times those of the end points (T, Ak and Bk) plus C times those of the points where the tangents at the end points intersect (Dk and Ek).
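In other words, each control point is a linear interpolation with factor C between an end point and the matching tangent intersection. A sketch (the ctrl helper and the sample values are mine; RC stands for R/√2, as in the article's code):

```javascript
const R = 100, C = .551915,
      RC = R/Math.SQRT2;

/* control point = (1 - C)*endPoint + C*tangentIntersection, per coordinate */
const ctrl = (end, tan) => end.map((c, i) => (1 - C)*c + C*tan[i]);

/* e.g. the control point for the curve leaving T(0, -RC) towards D0(RC, -2*RC) */
const T = [0, -RC], D0 = [RC, -2*RC],
      cp = ctrl(T, D0);

/* it ends up at C*R away from T, matching the C∙R segment length stated above */
console.log(Math.hypot(cp[0] - T[0], cp[1] - T[1]), C*R);
```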

So let's put all of this into JavaScript code!

Just like in the star case, we start with a getHeartPoints(f) function which depends on an arbitrary factor (f) that's going to help us get the radius of the helper circles from the viewBox size. This function also returns an array of coordinates we later use for interpolation.

Inside, we compute the stuff that doesn't change throughout the function. First off, the radius of the helper circles and, from that, the half diagonal of the small squares whose edge equals this radius (a half diagonal which is also the circumradius of these squares). Afterwards, the coordinates of the end points of our cubic curves (the T, Ak, Bk points), in absolute value for the ones along the horizontal axis. Then we move on to the coordinates of the points where the tangents through the end points intersect (the C, Dk, Ek points). These either coincide with the control points (C) or can help us get the control points (this is the case for Dk and Ek).

function getHeartPoints(f = .25) {
  const R = f*D, /* helper circle radius */
        RC = Math.round(R/Math.SQRT2), /* circumradius of square of edge R */
        XT = 0, YT = -RC, /* coords of point T */
        XA = 2*RC, YA = -RC, /* coords of A points (x in abs value) */
        XB = 2*RC, YB = RC, /* coords of B points (x in abs value) */
        XC = 0, YC = 3*RC, /* coords of point C */
        XD = RC, YD = -2*RC, /* coords of D points (x in abs value) */
        XE = 3*RC, YE = 0; /* coords of E points (x in abs value) */
}

The interactive demo below shows the coordinates of these points on click:

See the Pen by thebabydino (@thebabydino) on CodePen.

Now we can also get the control points from the end points and the points where the tangents through the end points intersect:

function getHeartPoints(f = .25) {
  /* same as before */
  const /* const for cubic curve approx of quarter circle */
        C = .551915, CC = 1 - C,
        /* coords of ctrl points on TD segs */
        XTD = Math.round(CC*XT + C*XD), YTD = Math.round(CC*YT + C*YD),
        /* coords of ctrl points on AD segs */
        XAD = Math.round(CC*XA + C*XD), YAD = Math.round(CC*YA + C*YD),
        /* coords of ctrl points on AE segs */
        XAE = Math.round(CC*XA + C*XE), YAE = Math.round(CC*YA + C*YE),
        /* coords of ctrl points on BE segs */
        XBE = Math.round(CC*XB + C*XE), YBE = Math.round(CC*YB + C*YE);
  /* same as before */
}

Next, we need to put the relevant coordinates into an array and return this array. In the case of the star, we started with the bottom curve and then went clockwise, so we do the same here. For every curve, we push two sets of coordinates for the control points and then one set for the point where the current curve ends.

See the Pen by thebabydino (@thebabydino) on CodePen.

Note that in the case of the first (bottom) curve, the two control points coincide, so we push the same pair of coordinates twice. The code doesn't look anywhere near as nice as in the case of the star, but it will have to suffice:

return [
  [XC, YC], [XC, YC], [-XB, YB],
  [-XBE, YBE], [-XAE, YAE], [-XA, YA],
  [-XAD, YAD], [-XTD, YTD], [XT, YT],
  [XTD, YTD], [XAD, YAD], [XA, YA],
  [XAE, YAE], [XBE, YBE], [XB, YB]
];

We can now take our star demo and use the getHeartPoints() function for the final state, no rotation and a crimson fill instead. Then, we set the current state to the final shape, just so that we can see the heart:

function fnStr(fname, farg) { return `${fname}(${farg})` };

(function init() {
  _SVG.setAttribute('viewBox', [-.5*D, -.5*D, D, D].join(' '));

  O.d = {
    ini: getStarPoints(),
    fin: getHeartPoints(),
    afn: function(pts) {
      return pts.reduce((a, c, i) => {
        return a + (i%3 ? ' ' : 'C') + c
      }, `M${pts[pts.length - 1]}`)
    }
  };

  O.transform = {
    ini: -180,
    fin: 0,
    afn: (ang) => fnStr('rotate', ang)
  };

  O.fill = {
    ini: [255, 215, 0],
    fin: [220, 20, 60],
    afn: (rgb) => fnStr('rgb', rgb)
  };

  for(let p in O)
    _SHAPE.setAttribute(p, O[p].afn(O[p].fin))
})();

This gives us a nice looking heart:

See the Pen by thebabydino (@thebabydino) on CodePen.

Ensuring consistent shape alignment

However, if we place the two shapes one on top of the other with no fill or transform, just a stroke, we see the alignment looks pretty bad:

See the Pen by thebabydino (@thebabydino) on CodePen.

The easiest way to solve this issue is to shift the heart up by an amount depending on the radius of the helper circles:

return [ /* same coords */ ].map(([x, y]) => [x, y - .09*R])

We now have much better alignment, regardless of how we tweak the f factor in either case. This is the factor that determines the pentagram circumradius relative to the viewBox size in the star case (when the default is .5) and the radius of the helper circles relative to the same viewBox size in the heart case (when the default is .25).

See the Pen by thebabydino (@thebabydino) on CodePen.

Switching between the two shapes

We want to go from one shape to the other on click. In order to do this, we set a direction dir variable which is 1 when we go from star to heart and -1 when we go from heart to star. Initially, it's -1, as if we've just switched from heart to star.

Then we add a 'click' event listener on the _SHAPE element and code what happens in this situation - we change the sign of the direction (dir) variable and we change the shape's attributes so that we go from a golden star to a crimson heart or the other way around:

let dir = -1;

(function init() {
  /* same as before */
  _SHAPE.addEventListener('click', e => {
    dir *= -1;
    for(let p in O)
      _SHAPE.setAttribute(p, O[p].afn(O[p][dir > 0 ? 'fin' : 'ini']));
  }, false);
})();

And we're now switching between the two shapes on click:

See the Pen by thebabydino (@thebabydino) on CodePen.

Morphing from one shape to another

What we really want however is not an abrupt change from one shape to another, but a gradual one. So we use the interpolation techniques explained in the previous article to achieve this.

We first decide on a total number of frames for our transition (NF) and choose the kind of timing functions we want to use - an ease-in-out type of function for transitioning the path shape from star to heart, a bounce-ini-fin type of function for the rotation angle and an ease-out one for the fill. We only include these, though we could later add others in case we change our mind and want to explore other options as well.

/* same as before */
const NF = 50,
      TFN = {
        'ease-out': function(k) {
          return 1 - Math.pow(1 - k, 1.675)
        },
        'ease-in-out': function(k) {
          return .5*(Math.sin((k - .5)*Math.PI) + 1)
        },
        'bounce-ini-fin': function(k, s = -.65*Math.PI, e = -s) {
          return (Math.sin(k*(e - s) + s) - Math.sin(s))/(Math.sin(e) - Math.sin(s))
        }
      };
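Whatever timing functions we choose, each should map progress 0 to 0 and 1 to 1 so the transition starts and ends exactly at the two states. A standalone sanity check (duplicating the definitions so it runs on its own):

```javascript
const TFN = {
  'ease-out': k => 1 - Math.pow(1 - k, 1.675),
  'ease-in-out': k => .5*(Math.sin((k - .5)*Math.PI) + 1),
  'bounce-ini-fin': (k, s = -.65*Math.PI, e = -s) =>
    (Math.sin(k*(e - s) + s) - Math.sin(s))/(Math.sin(e) - Math.sin(s))
};

for(let t in TFN)
  console.log(t, TFN[t](0), TFN[t](1)); /* each pair is 0 and 1 */
```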

We then specify which of these timing functions we use for each property we transition:

(function init() {
  /* same as before */
  O.d = { /* same as before */ tfn: 'ease-in-out' };
  O.transform = { /* same as before */ tfn: 'bounce-ini-fin' };
  O.fill = { /* same as before */ tfn: 'ease-out' };
  /* same as before */
})();

We move on to adding request ID (rID) and current frame (cf) variables, an update() function we first call on click, then on every refresh of the display until the transition finishes and we call a stopAni() function to exit this animation loop. Within the update() function, we... well, update the current frame cf, compute a progress k and decide whether we've reached the end of the transition and we need to exit the animation loop or we carry on.

We also add a multiplier m variable which we use so that we don't reverse the timing functions when we go from the final state (heart) back to the initial one (star).

let rID = null, cf = 0, m;

function stopAni() {
  cancelAnimationFrame(rID);
  rID = null;
};

function update() {
  cf += dir;

  let k = cf/NF;

  if(!(cf%NF)) {
    stopAni();
    return
  }

  rID = requestAnimationFrame(update)
};

Then we need to change what we do on click:

addEventListener('click', e => {
  if(rID) stopAni();

  dir *= -1;
  m = .5*(1 - dir);
  update();
}, false);
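The m = .5*(1 - dir) trick simply flips between 0 and 1: going forward (dir = 1) we get m = 0 and the easing argument stays tfn(k), while going backward (dir = -1) we get m = 1 and the expression becomes 1 - tfn(1 - k), i.e. the same curve traversed backwards rather than a mirrored one. A small standalone sketch of the two cases, using a hypothetical quadratic ease-in in place of a TFN entry:

```javascript
// m + dir*easing(m + dir*k) evaluated for both directions.
const easing = k => k*k; // hypothetical ease-in, stands in for any TFN entry

function eased(k, dir) {
  const m = .5*(1 - dir); // 0 going forward, 1 going backward
  return m + dir*easing(m + dir*k);
}

console.log(eased(.25, 1));  // forward: easing(.25) = 0.0625
console.log(eased(.25, -1)); // backward: 1 - easing(.75) = 0.4375
```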

Within the update() function, we want to set the attributes we transition to some intermediate values (depending on the progress k). As seen in the previous article, it's good to have the ranges between the final and initial values precomputed at the beginning, before even setting the listener. So that's our next step: creating a function that computes the range between two values, whether plain numbers or arrays nested to any depth, and then using it to set the ranges for the properties we want to transition.

function range(ini, fin) {
  return typeof ini == 'number' ?
    fin - ini :
    ini.map((c, i) => range(ini[i], fin[i]))
};

(function init() {
  /* same as before */

  for(let p in O) {
    O[p].rng = range(O[p].ini, O[p].fin);
    _SHAPE.setAttribute(p, O[p].afn(O[p].ini));
  }

  /* same as before */
})();
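Taken on its own, range() handles both the plain numeric values (like the rotation angle) and the nested arrays of coordinate pairs that make up the path data. A quick standalone check:

```javascript
// range() recurses into arrays of any depth, returning the
// per-component difference between the final and initial values.
function range(ini, fin) {
  return typeof ini == 'number' ?
    fin - ini :
    ini.map((c, i) => range(ini[i], fin[i]))
};

console.log(range(-180, 0)); // 180
console.log(range([[0, 1], [2, 3]], [[4, 6], [8, 10]])); // [[4, 5], [6, 7]]
```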

Now all that's left to do is the interpolation part in the update() function. Using a loop, we go through all the attributes we want to smoothly change from one end state to the other. Within this loop, we set their current value to the one we get as the result of an interpolation function which depends on the initial value(s), range(s) of the current attribute (ini and rng), on the timing function we use (tfn) and on the progress (k):

function update() {
  /* same as before */

  for(let p in O) {
    let c = O[p];

    _SHAPE.setAttribute(p, c.afn(int(c.ini, c.rng, TFN[c.tfn], k)));
  }

  /* same as before */
};

The last step is to write this interpolation function. It's pretty similar to the one that gives us the range values:

function int(ini, rng, tfn, k) {
  return typeof ini == 'number' ?
    Math.round(ini + (m + dir*tfn(m + dir*k))*rng) :
    ini.map((c, i) => int(ini[i], rng[i], tfn, k))
};
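Taken out of context, with the globals fixed for the forward direction (dir = 1, m = 0) and a linear timing function, the interpolation reduces to the familiar ini + k*rng, rounded, applied per component. A hypothetical standalone check:

```javascript
// Standalone version of int() with the globals m and dir fixed
// for the forward direction (dir = 1, m = 0).
const dir = 1, m = 0;

function int(ini, rng, tfn, k) {
  return typeof ini == 'number' ?
    Math.round(ini + (m + dir*tfn(m + dir*k))*rng) :
    ini.map((c, i) => int(ini[i], rng[i], tfn, k))
};

const linear = k => k;

console.log(int(-180, 180, linear, .5));           // halfway through the rotation: -90
console.log(int([0, 10], [100, 20], linear, .25)); // a quarter of the way: [25, 15]
```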

This finally gives us a shape that morphs from star to heart on click and goes back to star on a second click!

See the Pen by thebabydino (@thebabydino) on CodePen.

It's almost what we wanted, but there's still one tiny issue. For cyclic values like angles, we don't want to go back by half a circle on the second click; instead, we want to continue in the same direction for another half circle. Adding the half circle traveled after the second click to the one traveled after the first, we get a full circle, so we end up right back where we started.

We put this into code by adding an optional continuity property and tweaking the updating and interpolating functions a bit:

function int(ini, rng, tfn, k, cnt) {
  return typeof ini == 'number' ?
    Math.round(ini + cnt*(m + dir*tfn(m + dir*k))*rng) :
    ini.map((c, i) => int(ini[i], rng[i], tfn, k, cnt))
};

function update() {
  /* same as before */

  for(let p in O) {
    let c = O[p];

    _SHAPE.setAttribute(p, c.afn(int(c.ini, c.rng, TFN[c.tfn], k, c.cnt ? dir : 1)));
  }

  /* same as before */
};

(function init() {
  /* same as before */

  O.transform = {
    ini: -180,
    fin: 0,
    afn: (ang) => fnStr('rotate', ang),
    tfn: 'bounce-ini-fin',
    cnt: 1
  };

  /* same as before */
})();
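To see why this works, we can trace the rotation angle through a full forward and reverse cycle. With cnt = dir, the reverse transition doesn't retrace the forward one; it runs through angles that are equivalent modulo 360°, so the shape keeps rotating the same way. A standalone sketch with the globals passed in as parameters and a linear easing:

```javascript
// Standalone version of the tweaked int() for a scalar angle,
// with the globals dir and m passed in explicitly.
function int(ini, rng, tfn, k, cnt, dir, m) {
  return Math.round(ini + cnt*(m + dir*tfn(m + dir*k))*rng);
}

const linear = k => k;

// forward (dir = 1, m = 0, cnt = 1): the angle goes -180 → 0
console.log(int(-180, 180, linear, 0, 1, 1, 0),  // -180
            int(-180, 180, linear, 1, 1, 1, 0)); // 0

// reverse (dir = -1, m = 1, cnt = -1): the angle goes -360 → -180,
// visually 0 → 180 mod 360, continuing in the same rotation direction
console.log(int(-180, 180, linear, 1, -1, -1, 1),  // -360
            int(-180, 180, linear, 0, -1, -1, 1)); // -180
```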

We now have the result we've been after: a shape that morphs from a golden star into a crimson heart and rotates clockwise by half a circle every time it goes from one state to the other:

See the Pen by thebabydino (@thebabydino) on CodePen.

Creating a Star to Heart Animation with SVG and Vanilla JavaScript is a post from CSS-Tricks

Apple’s Proposal for HTML Template Instantiation

Mon, 11/06/2017 - 15:45

I'm sure I don't have the expertise to understand the finer nuances of this, but I like the spirit:

The HTML5 specification defines the template element but doesn't provide a native mechanism to instantiate it with some parts of it substituted, conditionally included, or repeated based on JavaScript values — as popular JavaScript frameworks such as Ember.js and Angular allow. As a consequence, there are many incompatible template syntaxes and semantics to do substitution and conditionals within templates — making it hard for web developers to combine otherwise reusable components when they use different templating libraries.

Whilst previously we all decided to focus on shadow DOM and the custom-elements API first, we think the time is right — now that shadow DOM and custom-elements API have been shipping in Safari and Chrome and are in development in Firefox — to propose and standardize an API to instantiate HTML templates.

Let the frameworks compete on speed and developer convenience in other ways.

Direct Link to ArticlePermalink

Apple’s Proposal for HTML Template Instantiation is a post from CSS-Tricks

So you need to parse an email?

Fri, 11/03/2017 - 23:59

Say you have a website with users who have accounts. Those users email you sometimes. What if you could parse that email for more context about that user, their account, and what they might want?

There are email parsing services out there. For example, Zapier offers Parser, which is free, with the idea being that you use Zapier itself to interconnect that data with other apps.

You teach it about your emails and then get programmatic access to those data bits.

mailparser.io is another service just for this.

Same deal, you send the emails to them, and from within that app you set up parsers and do all the processing you need to do.

That might not be exactly what you need.

Perhaps your goal in parsing an email is to extend the data available to you right in your email client.

Gmail is a pretty huge email client. I just noticed that they have released an official way to make "Gmail Add-ons":

Gmail add-ons are developed using Apps Script, a scripting language based on JavaScript that serves as a connective platform between Google products like Docs, Sheets, Drive, and Gmail. Every Gmail add-on has a corresponding Apps Script project where you define your add-on's appearance and behavior.

That might be just the ticket for those of you looking to get your hands on email data, do stuff with it, and have a UI for doing stuff right within Gmail. There is a marketplace to check out with existing apps. The Trello one seemed pretty compelling to me.


The contextual cards you create for your add-ons work for both web and mobile versions of Gmail. This means that you don't need to create separate web and mobile versions of the add-on—the same code works everywhere!

Personally, I make pretty heavy use of Front, which is like a shared team inbox superapp.

Front offers a plugin system as well, which adds your own custom panel right into the app itself and gives you all that programmatic parsing stuff you need to get into emails (or tweets or whatnot).

We use it at CodePen to figure out who's emailing us (from the perspective of our own app) and show some contextual information about them, as well as provide some quick common actions that we might need.

Another thing to consider is how the emails are being generated at all. For example, do you offer customer support by just saying "email us at xxx@yyy.com", or do you have them fill out a form which generates an email? If it's a form, that's, in a sense, parsing an email before it's even sent, meaning it has structure and potentially programmatic access to individual fields.

An example of that might be using a Wufoo form for your support and then using the API to access the data as needed. Maybe you can skip email parsing entirely.

So you need to parse an email? is a post from CSS-Tricks