CSS-Tricks

Tips, Tricks, and Techniques on using Cascading Style Sheets.

Implementing Push Notifications: Setting Up & Firebase

Tue, 08/22/2017 - 14:20

You know those little notification windows that pop up in the top right (Mac) or bottom right (Windows) corner when, for example, a new article on our favorite blog or a new video on YouTube is published? Those are push notifications.

Part of the magic of these notifications is that they can appear to give us that information even when we're not currently on that website (once we've approved them). On mobile devices, where supported, you can even close the browser and still get them.

Article Series:
  1. Setting Up & Firebase (You are here!)
  2. The Back End (Coming soon!)
Push notification on a Mac in Chrome

A notification consists of the browser logo so the user knows which software it comes from, a title, the URL of the website it was sent from, a short description, and a custom icon.

We are going to explore how to implement push notifications. Since they rely on Service Workers, check out these starting points if you are not familiar with them or the general functionality of the Push API:

What we are going to create

Preview of our push notification demo website

To test out our notifications system, we are going to create a page with:

  • a subscribe button
  • a form to add posts
  • a list of all the previously published posts

A repo on GitHub with the complete code can be found here, along with a preview of the project:

View Demo Site


Gathering all the tools

You are free to choose the back-end system which suits you best. I went with Firebase since it offers a special API which makes implementing a push notification service relatively easy.

We need:

In this part, we'll only focus on the front end, including the Service Worker and manifest, but to use Firebase, you will also need to register and create a new project.

Implementing Subscription Logic

HTML

We have a button to subscribe which gets enabled if 'serviceWorker' in navigator. Below that, a simple form and a list of posts:

<button id="push-button" disabled>Subscribe</button>

<form action="#">
  <input id="input-title">
  <label for="input-title">Post Title</label>
  <button type="submit" id="add-post">Add Post</button>
</form>

<ul id="list"></ul>

Implementing Firebase

To make use of Firebase, we need to implement some scripts.

<script src="https://www.gstatic.com/firebasejs/4.1.3/firebase-app.js"></script>
<script src="https://www.gstatic.com/firebasejs/4.1.3/firebase-database.js"></script>
<script src="https://www.gstatic.com/firebasejs/4.1.3/firebase-messaging.js"></script>

Now we can initialize Firebase using the credentials given under Project Settings → General. The sender ID can be found under Project Settings → Cloud Messaging. The settings are hidden behind the cog icon in the top left corner.

firebase.initializeApp({
  apiKey: '<API KEY>',
  authDomain: '<PROJECT ID>.firebaseapp.com',
  databaseURL: 'https://<PROJECT ID>.firebaseio.com',
  projectId: '<PROJECT ID>',
  storageBucket: '<PROJECT ID>.appspot.com',
  messagingSenderId: '<SENDER ID>'
})

Service Worker Registration

Firebase offers its own service worker setup by creating a file called `firebase-messaging-sw.js` which holds all the functionality to handle push notifications. But usually, you need your Service Worker to do more than just that. So with the useServiceWorker method we can tell Firebase to use our own `service-worker.js` file as well.

Now we can create a userToken and an isSubscribed variable which will be used later on.

const messaging = firebase.messaging(),
      database = firebase.database(),
      pushBtn = document.getElementById('push-button')

let userToken = null,
    isSubscribed = false

window.addEventListener('load', () => {
  if ('serviceWorker' in navigator) {
    navigator.serviceWorker.register('service-worker.js')
      .then(registration => {
        messaging.useServiceWorker(registration)
        initializePush()
      })
      .catch(err => console.log('Service Worker Error', err))
  } else {
    pushBtn.textContent = 'Push not supported.'
  }
})

Initialize Push Setup

Notice the function initializePush() after the Service Worker registration. It checks if the current user is already subscribed by looking up a token in localStorage. If there is a token, it changes the button text and saves the token in a variable.

function initializePush() {
  userToken = localStorage.getItem('pushToken')
  isSubscribed = userToken !== null

  updateBtn()

  pushBtn.addEventListener('click', () => {
    pushBtn.disabled = true
    if (isSubscribed) return unsubscribeUser()
    return subscribeUser()
  })
}

Here we also handle the click event on the subscription button. We disable the button on click to avoid triggering it multiple times.

Update the Subscription Button

To reflect the current subscription state, we need to adjust the button's text and style. We can also check if the user did not allow push notifications when prompted.

function updateBtn() {
  if (Notification.permission === 'denied') {
    pushBtn.textContent = 'Subscription blocked'
    return
  }
  pushBtn.textContent = isSubscribed ? 'Unsubscribe' : 'Subscribe'
  pushBtn.disabled = false
}

Subscribe User

Let's say the user visits us for the first time in a modern browser that supports Service Workers and the Push API, so they are not yet subscribed. When they click the button, the subscribeUser() function is fired.

function subscribeUser() {
  messaging.requestPermission()
    .then(() => messaging.getToken())
    .then(token => {
      updateSubscriptionOnServer(token)
      isSubscribed = true
      userToken = token
      localStorage.setItem('pushToken', token)
      updateBtn()
    })
    .catch(err => console.log('Denied', err))
}

Here we ask permission to send push notifications to the user by calling messaging.requestPermission().

The browser asking permission to send push notifications.

If the user blocks this request, the button is adjusted the way we implemented it in the updateBtn() function. If the user allows this request, a new token is generated and saved both in a variable and in localStorage. The token is then saved in our database by updateSubscriptionOnServer().

Save Subscription in our Database

If the user was already subscribed, we target the right database reference where we saved the tokens (in this case device_ids), look for the token the user provided before, and remove it.

Otherwise, we want to save the token. With .once('value'), we receive the key values and can check if the token is already there. This serves as a second layer of protection on top of the lookup in localStorage in initializePush(), since the token might get deleted from there for various reasons. We don't want the user to receive multiple notifications with the same content.

function updateSubscriptionOnServer(token) {
  if (isSubscribed) {
    return database.ref('device_ids')
      .equalTo(token)
      .on('child_added', snapshot => snapshot.ref.remove())
  }

  database.ref('device_ids').once('value')
    .then(snapshots => {
      let deviceExists = false
      snapshots.forEach(childSnapshot => {
        if (childSnapshot.val() === token) {
          deviceExists = true
          return console.log('Device already registered.')
        }
      })
      if (!deviceExists) {
        console.log('Device subscribed')
        return database.ref('device_ids').push(token)
      }
    })
}

Unsubscribe User

If the user clicks the button again after subscribing, their token gets deleted. We reset our userToken and isSubscribed variables as well as remove the token from localStorage and update our button again.

function unsubscribeUser() {
  messaging.deleteToken(userToken)
    .then(() => {
      updateSubscriptionOnServer(userToken)
      isSubscribed = false
      userToken = null
      localStorage.removeItem('pushToken')
      updateBtn()
    })
    .catch(err => console.log('Error unsubscribing', err))
}

To let the Service Worker know we use Firebase, we import the scripts into `service-worker.js` before anything else.

importScripts('https://www.gstatic.com/firebasejs/4.1.3/firebase-app.js')
importScripts('https://www.gstatic.com/firebasejs/4.1.3/firebase-database.js')
importScripts('https://www.gstatic.com/firebasejs/4.1.3/firebase-messaging.js')

We need to initialize Firebase again since the Service Worker cannot access the data inside our `main.js` file.

firebase.initializeApp({
  apiKey: "<API KEY>",
  authDomain: "<PROJECT ID>.firebaseapp.com",
  databaseURL: "https://<PROJECT ID>.firebaseio.com",
  projectId: "<PROJECT ID>",
  storageBucket: "<PROJECT ID>.appspot.com",
  messagingSenderId: "<SENDER ID>"
})

Below that we add all events around handling the notification window. In this example, we close the notification and open a website after clicking on it.

self.addEventListener('notificationclick', event => {
  event.notification.close()
  event.waitUntil(
    self.clients.openWindow('https://artofmyself.com')
  )
})

Another example would be synchronizing data in the background. Read Google's article about that.

Show Messages when on Site

When we are subscribed to notifications of new posts but are already visiting the blog at the same moment a new post is published, we don't receive a notification.

A way to solve this is by showing a different kind of message on the site itself like a little snackbar at the bottom.

To intercept the payload of the message, we call the onMessage method on Firebase Messaging.

The styling in this example uses Material Design Lite.

<div id="snackbar" class="mdl-js-snackbar mdl-snackbar">
  <div class="mdl-snackbar__text"></div>
  <button class="mdl-snackbar__action" type="button"></button>
</div>

import 'material-design-lite'

messaging.onMessage(payload => {
  const snackbarContainer = document.querySelector('#snackbar')
  let data = {
    message: payload.notification.title,
    timeout: 5000,
    actionHandler() {
      location.reload()
    },
    actionText: 'Reload'
  }
  snackbarContainer.MaterialSnackbar.showSnackbar(data)
})

Adding a Manifest

The last step for this part of the series is adding the Google Cloud Messaging Sender ID to the `manifest.json` file. This ID makes sure Firebase is allowed to send messages to our app. If you don't already have a manifest, create one and add the following. Do not change the value.

{ "gcm_sender_id": "103953800507" }

Now we are all set up on the front end. What's left is creating our actual database and the functions to watch database changes in the next article.

Article Series:
  1. Setting Up & Firebase (You are here!)
  2. The Back End (Coming soon!)


Be Slightly Careful with Sub Elements of Clickable Things

Tue, 08/22/2017 - 13:02

Say you want to attach a click handler to a <button>. You almost surely are, as outside of a <form>, buttons don't do anything without JavaScript. So you do that with something like this:

var button = document.querySelector("button");

button.addEventListener("click", function(e) {
  // button was clicked
});

But that doesn't use event delegation at all.

Event delegation is where you bind the click handler not directly to the element itself, but to an element higher up the DOM tree. The idea being that you can rip out and plop in new DOM stuff inside of there and not worry about events being destroyed and needing to re-bind them.

Say our button has a gear icon in it:

<button>
  <svg>
    <use xlink:href="#gear"></use>
  </svg>
</button>

And we bind it by watching for clicks way up on the document element itself:

document.documentElement.addEventListener("click", function(e) {
});

How do we know if that click happened on the button or not? We have the target of the event for that:

document.documentElement.addEventListener("click", function(e) {
  console.log(e.target);
});

This is where it gets tricky. In this example, even if the user clicks right on the button somewhere, depending on exactly where they click, e.target could be:

  • The button element
  • The svg element
  • The use element

So if you were hoping to be able to do something like this:

document.documentElement.addEventListener("click", function(e) {
  if (e.target.tagName === "BUTTON") {
    // may not work, because the target might be the svg or use element
  }
});

Unfortunately, it's not going to be that easy. It doesn't matter if you check for class name or ID or whatever else, the element you are expecting might just be wrong.
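One JavaScript-side workaround (my sketch, not from the original post) is Element.closest(), which climbs from e.target up to the nearest matching ancestor; note it needs a polyfill in older browsers like IE:

document.documentElement.addEventListener("click", function(e) {
  // climb from whatever was actually clicked (svg, use, etc.)
  // to the nearest <button> ancestor, or null if there isn't one
  var button = e.target.closest("button");
  if (button) {
    // the click landed on or inside the button
  }
});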

There is a pretty decent CSS fix for this... If we make sure nothing within the button accepts pointer events, clicks inside the button will always be attributed to the button itself:

button > * { pointer-events: none; }

This also prevents a situation where other JavaScript has prevented the event from bubbling up to the button itself (or higher).

document.querySelector("button > svg").addEventListener("click", function(e) {
  e.stopPropagation();
  e.preventDefault();
});

document.querySelector("button").addEventListener("click", function() {
  // If the user clicked right on the SVG,
  // this will never fire
});


Strongly Held Opinions, Gone Away

Mon, 08/21/2017 - 21:29

I received a really wonderful question from Bryan Braun the other day during a workshop I was giving at Sparkbox. He asked if, over the years, there were opinions about web design and development I strongly held that I don't anymore.

I really didn't have a great answer at the time, even though surely if I could rewind my brain there would be some embarrassing ones in there.

At the risk of some heavy self-back-patting, this is exactly the reason I try and be pretty open-minded. If you aren't, you end up eating crow. And for what? When you crap on an idea, you sound like a jerk at the time, and likely cause more harm than good. If you end up right, you were still a jerk. If you end up wrong, you were a jerk and a fool.

I like the sentiment that "the web is a big place." It's a quick way of saying there are no hard and fast right answers in a playground this big with loose rules, diversity of everything, and economic overlords.

I don't want to completely punt on this question though.

I've heard Trent Walton say a number of times that, despite him being all-in on Responsive Web Design now, at first it seemed like a very bad idea to him.

I remember feeling very late to the CSS preprocessing world, because I spent years rolling my eyes at it. I thought it was the result of back-end nerds sticking their noses into something and bringing programming somewhere that didn't need it. Looking back, it was mostly me being afraid to learn the tools needed to bring it into a workflow.

It's not hard to find industry-wide holy wars these days, where strongly held opinions duke it out over time, and probably end up giving ground to each other in the end.

But what of those internal personal battles? I'd be very interested to hear people's answers on this...

What strongly-held opinion did you use to have about web design and development, but not anymore?


Double Opt-In Email Intros

Mon, 08/21/2017 - 14:19

You know those "introduction" emails? Someone thinks you should meet someone else, and emails happen about it. Or it's you doing the introducing, either by request or because you think it's a good idea. Cutting to the chase here, those emails could be done better. Eight years ago, Fred Wilson coined the term "double opt-in intro".

This is how it can work.

You're doing the vetting

Since you're writing the emails, it's your reputation at stake. If you do an introduction that is obnoxious for either side, they'll remember. Make sure you're introducing people that you really do think should know each other. Like a bizdev cupid.

You're gonna do the writing two (or three) times

The bad way to do an intro is to email both people at once. Even if this introduction has passed your vetting, you have no idea how it's going to turn out. There is a decent chance either of them or both aren't particularly interested in this, which makes you look like a dolt. It doesn't respect either of their time, puts your reputation at risk, and immediately puts everyone into an awkward position (if they ignore it they look like an asshole).

Instead, you're going to write two emails, one to each person you're trying to introduce. And you're not going to reveal who the other person is, except with non-identifying relevant details and your endorsement.

They do the opt-ing in

If either of the folks are interested in this introduction, they can email you back. Give them an easy out though, I'd say something like "if for any reason you aren't into it, just tell me so or ignore this, I promise I understand". If you don't make it easy to blow you off, you're just transferring the awkward situation to yourself.

If either of them isn't into it, it doesn't matter. They don't know who the other is and there is no awkwardness or burnt bridge.

If both are into it, great, now it's time for the third email actually introducing them. Get out of the way quickly.

It's about more than awkwardness and reputation, it's about safety

See:

It's also why double opt-in intros are *a must*. Please please please don't go intro'ing people to each other without asking first.

— Lara Hogan (@lara_hogan) August 5, 2017

Just because you have someone's email address in your book doesn't mean you should be giving it out to anyone that asks. Better to just assume any contact info you have for someone else is extremely private and only to be shared with their permission.


Pattern Library Workflow

Fri, 08/18/2017 - 15:55

Jon Gunnison documents some things that have made pattern libraries successful at Allstate. Tidbits I found interesting:

  • There are specific jobs (part of what he calls "governance") for maintaining the library. I love that they are called librarians. A "designer librarian" and a "UI dev librarian".
  • Acknowledgment that there are "snowflakes", or single instances that don't fit into a pattern (at least right now).
  • The pattern library is fed by information that comes in from lots of different places. Hence, requiring librarians to triage.



Using Custom Properties to Modify Components

Fri, 08/18/2017 - 15:07

Instead of using custom properties to style whole portions of a website’s interface I think we should use them to customize and modify tiny components. Here’s why.

Whenever anyone mentions CSS custom properties they often talk about the ability to theme a website’s interface in one fell swoop. For example, if you’re working somewhere like a big news org then you might want to specify a distinct visual design for the Finance section and the Sports section – buttons, headers, pull quotes and text color could all change on the fly.

Custom properties would make this sort of theming easy because we won’t have to add a whole bunch of classes to each component. All we’d have to do is edit a single variable that’s in the :root, plus we can then edit those custom props with JavaScript which is something we can’t do with something like Sass variables.
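As a quick aside, that JavaScript bit really is a one-liner. This sketch of mine assumes the --mainColor variable from Chris's example just below:

// re-theme everything that references --mainColor, at runtime
document.documentElement.style.setProperty('--mainColor', '#ff6969');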

A while back Chris wrote about this use case in a post about custom properties and theming and the example he gave looked like this:

:root {
  --mainColor: #5eb5ff;
}

header {
  background: var(--mainColor);
}

footer {
  background: var(--mainColor);
}

See the Pen Theming a site with CSS Custom Properties by Chris Coyier (@chriscoyier) on CodePen.

But the more I learn about building big ol’ systems with CSS, the more I think that global restyling like this makes it really difficult to keep the code clean and consistent over the long haul. And if you’re working on any large web app then you’re probably using something like React where everything is made of tiny reusable components anyway, especially because at this scale the cascade can be scary and dangerous.

If we’re working on larger, more complex systems then how should we be using custom properties then? Well I think the best option is to keep them on the component level to help make our CSS really clean. So instead of adding variables to the root element we could bind them to the component instead, like this:

.btn { --btnColor: #5eb5ff; }

After which we could set properties such as color or border to use this variable:

.btn {
  --btnColor: #5eb5ff;
  border: 1px solid var(--btnColor);
  color: var(--btnColor);

  &:hover {
    color: white;
    background-color: var(--btnColor);
  }
}

So far so good! We can then add modifier classes that simply change the value of the custom property:

.btn-red {
  --btnColor: #ff6969;
}

.btn-green {
  --btnColor: #7ae07a;
}

.btn-gray {
  --btnColor: #555;
}

See the Pen Custom Properties by Robin Rendle (@robinrendle) on CodePen.

See how nice and tidy that is? With just a few lines of CSS we’ve created a whole system of buttons – we could easily change the font-size or add animations or anything else and keep our classes nice and small without messing with the global scope of our CSS. Especially since all this code is likely to live in a single file like buttons.scss, it’s helpful that all the logic exists in one place.

Anyway, this method of using custom properties on a component level certainly isn’t as exciting or stylish as using a single variable to style every part of a website, but I’m not entirely sure how useful that sort of theming is anyway. A lot of the time a design will require a bunch of tiny adjustments to each component for usability reasons, so it makes sense to break everything down to the component level.

What do you think is the most useful thing about custom properties? I’d love to hear what everyone thinks about this stuff in the comments below.


Saving SVG with Space Around It from Illustrator

Fri, 08/18/2017 - 14:43

Say you have a graphic like this in Adobe Illustrator:

Note how the art doesn't touch the edges of the artboard. Say you want that space around it, and you want to save it as SVG for use on the web.

Nope: Save for Web

THE CLAW! You'll see space around here, but unfortunately the classic Save for Web dialog doesn't export as SVG at all, so that's not really an option.

They are already calling this a "legacy" feature, so I imagine it'll be gone soon.

Nope: Export As

The "Export As" feature supports SVG, and you'll likely be pretty pleased with the output. It's fairly optimized, cruft-free, and pretty much ready to use on the web.

But... it crops to the art with no option to change that, so we'll lose the surrounding space we're shooting for here.

A possible workaround here is putting a rectangle behind the art with the spacing around it we need, but then we get a rectangle in the output, which shouldn't be necessary.

Nope: Asset Export

The Asset Export panel is mighty handy, but the export crops to the art and there is no way to change that.

Yep: Export for Screens

The trick in preserving the space is to export the artboard itself. You can do that from the Export for Screens dialog.

The viewBox will then reflect the artboard and the space we have left around the art. That's what we were aiming for, so I'm glad there is a way!


Visual Email Builder Apps

Thu, 08/17/2017 - 11:33

I bet y'all know that apps like Campaign Monitor and MailChimp have visual email builders built right into them. You drag and drop different types of content right into a layout. You edit text right within the email. It's nice. It's a lot nicer than editing the quagmire of HTML underneath, anyway!

But not everybody needs all the rest of the features that those apps bring, like list management and the actual sending of the email. Perhaps you have an app that already handles that kind of thing. You just need to design some emails, get the final HTML, and use it in your own app.

When I was looking around at email tooling, I was surprised there were a good number of apps that help just with the visual email building. Very cool.

  • Toptol
  • BEE free
  • EDMdesigner
  • RED (Responsive Email Designer)
  • Taxi for Email

I haven't used any of them extensively enough to make a firm recommendation, but I've been dabbling and I like that they exist and that there are options.


Oxygen – The WordPress Visual Site Builder for Real Designers?

Thu, 08/17/2017 - 10:55

WordPress page builders are generally shunned by those who know how to code. They are generally bloated and slow. And you are offered very limited customization options. But what if there was a visual site builder meant for advanced, professional website designers?

It turns out there is! It's called Oxygen, and it's quickly becoming the tool of choice for WordPress web designers.

Notice that with Oxygen, you design your entire site - content, headers, footers, menus, etc. It totally replaces your WordPress theme.

All pages are constructed from fundamental HTML elements - section, div, h1...6, p, span, a, img, and a few more. Then, you visually edit CSS properties to get everything looking the way you want.

So unlike a typical page builder, you can design anything. It's like hand-coding, but visually. Think Webflow, but for WordPress.

To integrate with WordPress and design layouts for posts, custom post types, archives, etc., Oxygen has a robust templating system. Basically, it replaces the WordPress template hierarchy with a visual system to apply templates.

Then you can write PHP code inside Oxygen's interface and call WP API functions, run the WordPress loop, etc.

There are really no limits to what you can do with Oxygen. It is far and away more powerful than any other WordPress page building tool available. Other than hand-coding your WordPress theme, nothing I've ever seen gives you flexibility like this.

This might be the future, so check it out! You will be pleasantly surprised.

Direct Link to ArticlePermalink


Using the Paint Timing API

Wed, 08/16/2017 - 12:56

It's a great time to be a web performance aficionado, and the arrival of the Paint Timing API in Chrome 60 is proof positive of that fact. The Paint Timing API is yet another addition to the burgeoning Performance API, but instead of capturing page and resource timings, this new and experimental API allows you to capture metrics on when a page begins painting.

If you haven't experimented with any of the various performance APIs, it may help if you brush up a bit on them, as the syntax of this API has much in common with those APIs (particularly the Resource Timing API). That said, you can read on and get something out of this article even if you don't. Before we dive in, however, let's talk about painting and the specific timings this API collects.

Why do we need an API for measuring paint times?

If you're reading this, you're likely familiar with what painting is. If not, it's a simple concept to grasp: Painting is any activity by the browser that involves drawing pixels to the browser window. It's a crucial part of the rendering pipeline. When we talk about painting in performance parlance, we're often referring to the time at which the browser begins to paint a page as it loads. This moment is appropriately called "time to first paint".

Why is this metric important to know? Because it signifies to us the earliest possible point at which something appears after a user requests a page. A lot goes on as a page is loading, but one thing we know is that the sooner we can get something to appear for the user, the sooner they'll realize that something is happening. Sort of like your LDL cholesterol, most performance-oriented goals involve lowering your numbers. Until you know what your numbers are to begin with, though, reaching those goals can be an exercise in futility.

Thankfully, this is where the Paint Timing API can help us out. This API allows you to capture how fast a page is painting for your site's visitors using JavaScript. Synthetic testing in programs such as Lighthouse or sitespeed.io is great in that it gives us a baseline to work with for improving the performance of sites in our care, but all of that testing is in a vacuum. It doesn't tell you how your site is performing for those who actually use it.

Compared to similar performance APIs, the Paint Timing API is much more simplified. It provides us with only two metrics:

first-paint: This is likely what you think it is. The point at which the browser has painted the first pixel on the page. It may look something like this:

What `first-paint` might look like.

first-contentful-paint: This is a bit different than first-paint in that it captures the time at which the first bit of content is painted, be it text, an image, or whatever isn't some variation of non-contentful styling. That scenario may look something like this:

What a `first-contentful-paint` event might look like.

It's important to point out that these two points in time may not always be so distinct from one another. Depending on the client-side architecture of a given website, first-paint and first-contentful-paint metrics may not differ. Where faster and lighter web experiences are concerned, they'll often be nearly (or even exactly) the same thing. On larger sites where client side architecture involves a lot of assets (and/or when connections are slower), these two metrics may occur further apart.

In any case, let's get an eye on how to use this API, which has landed in Chrome 60.

A straightforward use case

There are a couple ways you can use this API. The easiest way is to attach the code to an event that occurs some time after the first paint. The reason you might want to attach this to an event instead of running it immediately is so the metrics are actually available when you attempt to pull them from the API. Take this code for example:

if("performance" in window){ window.addEventListener("load", ()=>{ let paintMetrics = performance.getEntriesByType("paint"); if(paintMetrics !== undefined && paintMetrics.length > 0){ paintMetrics.forEach((paintMetric)=>{ console.log(`${paintMetric.name}: ${paintMetric.startTime}`); }); } }); }

This code does the following:

  1. We do a simple check to see if the performance object is in the window object. This prevents any of our code from running if performance is unavailable.
  2. We attach code using addEventListener to the window object's load event, which will fire when the page and its assets are fully loaded.
  3. In the load event code, we use the performance object's getEntriesByType method to retrieve all event types of "paint" to a variable called paintMetrics.
  4. Because only Chrome 60 (and later) currently implements the Paint Timing API, we need to check if any entries were returned. To do this, we check that paintMetrics isn't undefined and that its length is greater than 0.
  5. If we've made it past those checks, we then output the name of the metric and its start time to the console, which will look something like this:
Paint timings exposed in the console.

The timings you see in the console screenshot above are in milliseconds. From here, you can send these metrics someplace to be stored and analyzed for later.
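For instance (a sketch of mine, not from the article), navigator.sendBeacon() is well suited to shipping metrics off, since it posts in the background without delaying anything; the /analytics endpoint here is a made-up placeholder:

if ("performance" in window && "sendBeacon" in navigator) {
  window.addEventListener("load", () => {
    const paintMetrics = performance.getEntriesByType("paint");
    if (paintMetrics.length > 0) {
      // queues a POST that the browser completes on its own time
      navigator.sendBeacon("/analytics", JSON.stringify(paintMetrics));
    }
  });
}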

This works great and all, but what if we want to have access to these metrics as soon as the browser collects them? For that, we'll need PerformanceObserver.

Capturing paint metrics with PerformanceObserver

If you absolutely, positively need to access timings as soon as they're available in the browser, you can use PerformanceObserver. Using PerformanceObserver can be tricky, especially if you want to make sure you're not breaking behavior for browsers that don't support it, or if browsers do support it, but don't support "paint" events. This latter scenario is pertinent to our efforts here because polling for unsupported events can throw a TypeError.

Because PerformanceObserver gathers metrics and logs them asynchronously, our best bet is to use a promise, which helps us handle async'y stuff without the callback hell of yesteryear. Take this code, for example:

if("PerformanceObserver" in window){ let observerPromise = new Promise((resolve, reject)=>{ let observer = new PerformanceObserver((list)=>{ resolve(list); }); observer.observe({ entryTypes: ["paint"] }); }).then((list)=>{ list.getEntries().forEach((entry)=>{ console.log(`${entry.name}: ${entry.startTime}`); }); }).catch((error)=>{ console.warn(error); }); }

Let's walk through this code:

  1. We check for the existence of the PerformanceObserver object in window. If PerformanceObserver doesn't exist, nothing happens.
  2. A Promise is created. In the first part of the promise chain, we create a new PerformanceObserver object and store it in the observer variable. This observer contains a callback that resolves the promise with a list of paint timings.
  3. We have to get those paint timings from somewhere, right? That's where the observe method comes in. This method lets us define what types of performance entries we want. Since we want painting events, we just pass in an array with an entry type of "paint".
  4. If the browser supports gathering "paint" events with PerformanceObserver, the promise will resolve and the next part of the chain kicks in where we then have access to the entries through the list variable's getEntries method. This will produce console output much like the previous example.
  5. If the current browser doesn't support gathering "paint" events with PerformanceObserver, the catch method provides access to the error message. From here, we can do whatever we want with this information.

Now you have a way to gather metrics asynchronously, instead of having to wait for the page to load. I personally prefer the previous method, as the code is more terse and readable (to me, anyway). I'm sure my methods aren't the most robust, but they are illustrative of the fact that you can gather paint timings in the browser in a predictable way that shouldn't throw errors in older browsers.

What would I use this for?

Depends on what you're after. Maybe you want to see just how fast your site is rendering for real users out in the wild. Maybe you want to gather data for research. At the time of writing, I'm conducting an image quality research project that gauges participants on how they perceive lossy image quality of JPEGs and WebP images. As part of my research, I use other timing APIs to gather performance-related information, but I'm also gathering paint timings. I don't know if this data will prove useful, but collecting and analyzing it in tandem with other metrics may be helpful to my findings. How you use this data is really up to you. In my humble opinion, I think it's great that this API exists, and I hope more browsers move to implement it soon.

Some other stuff you might want to read

Reading this short piece might have gotten you interested in some other pieces of the broader performance interface. Here's a few articles for you to check out if your curiosity has been sufficiently piqued:

  • The surface of this API is shared with the established Resource Timing API, so you should brush up on that. If you feel comfortable with the code in the article, you'll be able to immediately benefit from this incredibly valuable API.
  • While this API doesn't share much of a surface with the Navigation Timing API, you really ought to read up on it. This API allows you to collect timing data on how fast the HTML itself is loading.
  • PerformanceObserver has a whole lot more to it than what I've illustrated here. You can use it to get resource timings and user timings. Read up on it here.
  • Speaking of user timings, there's an API for that. With this API, you can measure how long specific JavaScript tasks are taking using highly accurate timestamps. You could also use this tool to measure latency in how users interact with the page.

Now that you've gotten your hands dirty with this API, head out and see what it (and other APIs) can do for you in your quest to make the web faster for users!

Jeremy Wagner is the author of Web Performance in Action, available now from Manning Publications. Use promo code sswagner to save 42%.

Check him out on Twitter: @malchata


A Poll About Pattern Libraries and Hiring

Tue, 08/15/2017 - 13:49

I was asked (by this fella on Twitter) a question about design patterns. It has an interesting twist though, related to hiring, which I hope makes for a good poll.


I'll let this run for a week or two. Then (probably) instead of writing a new post with the results, I'll update this one with the results. Feel free to comment with the reasoning for your vote.


(An Interview About) imgix Page Weight

Tue, 08/15/2017 - 13:41

Imgix has been a long-time display ad sponsor here on CSS-Tricks. This post is not technically sponsored, I just noticed that they released a tool for analyzing image performance at any given URL that is pretty interesting.

We know web performance is a big deal. We know that images are perhaps the largest offender in ballooning page weights across the web. We know we have tools for looking at page performance as a whole. It seems fairly new to me to have tools specifically for analyzing and demonstrating how we could have done better with images. That's what this Page Weight tool is.

Clearly this is a marketing tool for them. You put in a URL, and it tells you how you could have done better, and specifically how imgix can help do that. I'm generally a fan of that. Tools with businesses behind them have the resources and motivation to stick around and get better. But as ever, something to be aware of.

I asked Brit Morgan some questions about it.

As we can see checking out the homepage for Page Weight, you drop in a URL, it analyzes all the images and gives you some information about how much more performant they could have been. What's going on behind the scenes there?

We run each image on the page through imgix, resizing to fit the image container as best we can tell, and also transform file formats, color palettes, and quality breakpoints to determine which combination provides the best size savings. Then we display that version for each image.

I see it suggests fitting the image to the container, but that only makes sense for 1x displays, right? Images need to be larger than their display pixel size for visual crispness on high-density displays.

Definitely. The Page Weight tool does not currently address high-DPR display differences, but our service does. We offer automated high-DPR support via Client Hints, and manually via our dpr parameter, which allows developers to set the desired value directly (useful on its own or as a fallback for Client Hint support in browsers that don't yet support that functionality). Our imgix.js front-end library also generates a comprehensive srcset (based on the defined sizes) to address whatever size/DPR the device requires.
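(For illustration only — this is my sketch, not imgix documentation: manual dpr handling in markup can look like the following, where the domain is a placeholder and w and dpr are imgix's width and device-pixel-ratio parameters.)

<img
  src="https://example.imgix.net/photo.jpg?w=400"
  srcset="https://example.imgix.net/photo.jpg?w=400&dpr=1 1x,
          https://example.imgix.net/photo.jpg?w=400&dpr=2 2x,
          https://example.imgix.net/photo.jpg?w=400&dpr=3 3x"
  alt="A 400px-wide photo served at the device's density">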

I think most developers here are smart enough to realize this is really smart marketing for imgix. But also smart enough to realize the images are a huge deal in web performance and want to do better. What can imgix do that a developer on their own can't do? Or that is fairly impractical for a developer to do on their own?

First, it is important to note that resizing is not the only thing that imgix does, although it is a very common use case. We provide over 100 different processing parameters that enable developers to do everything from context-aware cropping to color space handling to image compositing. So adopting imgix gives a developer access to a lot of image handling flexibility without a lot of hassle, even if they’re primarily using it to support responsive design.

That said, it is not impossible to get a very simple resizing solution running on your own, and many developers start out there. Usually, this takes the form of some kind of batch script based on ImageMagick or Pillow or some other image manipulation library that creates derivative images for the different breakpoints.

For a while, that's often sufficient. But once your image library gets beyond a few hundred images, batch-based systems begin to break down in various ways. Visibility, error handling, image catalog cleaning, and adding support for new formats and devices are all things that get much harder at scale. Very large sites and sites where content changes constantly will often end up spending significant dev time on these kinds of maintenance tasks.

So really, "could you build this?" is a less useful question than "should you build this?" In other words, is image processing central enough to the value proposition of what you're building that you're willing to spend time and effort maintaining your own system to handle it? Usually, the answer is no. Most developers would rather focus on what's important and leave images to something like imgix — a robust, scalable system that just works.

Does the tool look at responsive images syntax in HTML? As in, which image was actually downloaded according to the srcset/sizes or picture element rules?

Not yet. That's a feature we're hoping to implement in the next version of the tool.

Can you share implementations of imgix that are particularly impressive or creative?

An interesting use we see more and more is image processing for social media. These days, many sites see the majority of their traffic coming in through social, which makes it more important than ever to make content look good in the feed. Setting OpenGraph tags is a start, but every social network has a different container size. This creates a similar problem to the one posed by mobile fragmentation, and we can help by dynamically generating social images for each network. This provides a polished presentation without adding a ton of overhead for the person maintaining the site.

Other customers are pushing even further by combining several images to create a custom presentation for social. HomeChef, a meal delivery service, does this to dynamically create polished, branded images for Pinterest from their ingredient photos.

We actually created an open source tool called Motif (GitHub Repo) to make it easier for developers to get started with dynamically generating social images through imgix.


Using ES2017 Async Functions

Mon, 08/14/2017 - 12:23

ES2017 was finalized in June, and with it came wide support for my new favorite JavaScript feature: async functions! If you've ever struggled with reasoning about asynchronous JavaScript, this is for you. If you haven't, then, well, you're probably a super-genius.

Async functions more or less let you write sequenced JavaScript code, without wrapping all your logic in callbacks, generators, or promises. Consider this:

function logger() {
  let data = fetch('http://sampleapi.com/posts')
  console.log(data)
}

logger()

This code doesn't do what you expect. If you've built anything in JS, you probably know why.

But this code does do what you'd expect.

async function logger() {
  let data = await fetch('http://sampleapi.com/posts')
  console.log(data)
}

logger()

That intuitive (and pretty) code works, and it's only two additional words!

Async JavaScript before ES6

Before we dive into async and await, it's important that you understand promises. And to appreciate promises, we need to go back one more step to just plain ol' callbacks.

Promises were introduced in ES6, and made great improvements to writing asynchronous code in JavaScript. No more "callback hell", as it is sometimes affectionately referred to.

A callback is a function that can be passed into a function and called within that function as a response to any event. It's fundamental to JS.

readFile('file.txt', (data) => {
  // This is inside the callback function
  console.log(data)
})

That function is simply logging the data from a file, which isn't possible until the file is finished being read. It seems simple, but what if you wanted to read and log five different files in sequence?

Before promises, in order to execute sequential tasks, you would need to nest callbacks, like so:

// This is officially callback hell
function combineFiles(file1, file2, file3, printFileCallBack) {
  let newFileText = ''
  readFile(file1, (text) => {
    newFileText += text
    readFile(file2, (text) => {
      newFileText += text
      readFile(file3, (text) => {
        newFileText += text
        printFileCallBack(newFileText)
      })
    })
  })
}

It's hard to reason about and difficult to follow. This doesn't even include error handling for the entirely possible scenario that one of the files doesn't exist.

I Promise it gets better (get it?!)

This is where a Promise can help. A Promise is a way to reason about data that doesn't yet exist, but you know it will. Kyle Simpson, author of the You Don't Know JS series, is well known for giving async JavaScript talks. His explanation of promises from this talk is spot on: it's like ordering food at a fast-food restaurant.

  1. Order your food.
  2. Pay for your food and receive a ticket with an order number.
  3. Wait for your food.
  4. When your food is ready, they call your ticket number.
  5. Receive the food.

As he points out, you may not be able to eat your food while you're waiting for it, but you can think about it, and you can prepare for it. You can proceed with your day knowing that food is going to come, even if you don't have it yet, because the food has been "promised" to you. That's all a Promise is. An object that represents data that will eventually exist.

readFile(file1)
  .then((file1Data) => { /* do something */ })
  .then((previousPromiseData) => { /* do the next thing */ })
  .catch( /* handle errors */ )

That's the promise syntax. Its main benefit is that it allows an intuitive way to chain together sequential events. This basic example is alright, but you can see that we're still using callbacks. Promises are just thin wrappers on callbacks that make it a bit more intuitive.

The (new) Best Way: Async / Await

A couple of years ago, async functions made their way into the JavaScript ecosystem. As of last month, it's an official feature of the language and widely supported.

The async and await keywords are a thin wrapper built on promises and generators. Essentially, it allows us to "pause" our function anywhere we want, using the await keyword.

async function logger() {
  // pause until fetch returns
  let data = await fetch('http://sampleapi.com/posts')
  console.log(data)
}

This code runs and does what you'd want. It logs the data from the API call. If your brain didn't just explode, I don't know how to please you.

The benefit to this is that it's intuitive. You write code the way your brain thinks about it, telling the script to pause where it needs to.

The other advantages are that you can use try and catch in a way that we couldn't with promises:

async function logger () {
  try {
    let user_id = await fetch('/api/users/username')
    let posts = await fetch(`/api/posts/${user_id}`)
    let object = JSON.parse(posts.toString())
    console.log(object)
  } catch (error) {
    console.error('Error:', error)
  }
}

This is a contrived example, but it proves a point: catch will catch the error that occurs in any step during the process. There are at least 3 places that the try block could fail, making this by far the cleanest way to handle errors in async code.

We can also use async functions with loops and conditionals without much of a headache:

// assumes a promise-returning sleep helper like this one
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms))

async function count() {
  let counter = 1
  for (let i = 0; i < 100; i++) {
    counter += 1
    console.log(counter)
    await sleep(1000)
  }
}

This is a silly example, but that will run how you'd expect and it's easy to read. If you run this in the console, you'll see that the code will pause on the sleep call, and the next loop iteration won't start for one second.

The Nitty Gritty

Now that you're convinced of the beauty of async and await, let's dive into the details:

  • async and await are built on promises. A function that uses async will always itself return a promise (see the sketch after this list). This is important to keep in mind, and probably the biggest "gotcha" you'll run into.
  • When we await, it pauses the function, not the entire code.
  • async and await are non-blocking.
  • You can still use Promise helpers such as Promise.all(). Here's our earlier example:

    async function logPosts () {
      try {
        let user_id = await fetch('/api/users/username')
        let post_ids = await fetch(`/api/posts/${user_id}`)
        let promises = post_ids.map(post_id => {
          return fetch(`/api/posts/${post_id}`)
        })
        let posts = await Promise.all(promises)
        console.log(posts)
      } catch (error) {
        console.error('Error:', error)
      }
    }
  • await can only be used in functions that have been declared async.
  • Therefore, you can't use await in the global scope.

    // throws an error
    function logger (callBack) {
      console.log(await callBack)
    }

    // works!
    async function logger () {
      console.log(await callBack)
    }
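To make the first bullet's "gotcha" concrete, here's a tiny sketch of mine showing the automatic promise wrapping:

async function answer() {
  return 42 // automatically wrapped in a promise
}

console.log(answer()) // logs a pending Promise, not 42
answer().then(value => console.log(value)) // logs 42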
Available now!

The async and await keywords are available in almost every browser as of June 2017. Even better, to ensure your code works everywhere, use Babel to preprocess your JavaScript into an older syntax that older browsers support.
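As a rough sketch of that Babel setup (assuming the babel-preset-env package that was current in mid-2017 — check the docs for your version, and note the transformed code also needs a regenerator runtime such as babel-polyfill), a .babelrc might look like:

{
  "presets": [
    ["env", { "targets": { "browsers": ["last 2 versions", "safari >= 7"] } }]
  ]
}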

If you're interested in more of what ES2017 has to offer, you can see a full list of ES2017 features here.


Long Distance

Mon, 08/14/2017 - 12:23

A podcast (turns out to be a 2-parter) from Reply All in which Alex Goldman gets a scam phone call about his iCloud account being compromised. He goes pretty far into investigating it, speaking regularly with the people who run these scams.

Especially resonant for me, as someone who also spoke directly with a hacker whose goal was doing me harm. I've long struggled with thinking rationally about stuff like this.



Crafting Webfont Fallbacks

Sun, 08/13/2017 - 23:25

There is a great bit in here where Glen uses Font Style Matcher to create some CSS for a fallback font that has font-size, line-height, font-weight, letter-spacing, and word-spacing adjusted so perfectly that when the web font does load, the page hardly shifts at all. Like barely noticeable FOUT. Maybe we'll call it FOCST (Flash of Carefully Styled Text).
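The gist of the technique, as a rough sketch (the numbers are illustrative rather than Glen's actual values, and the fonts-loaded class is assumed to be added by a font-loading script):

/* while the webfont loads: tune the fallback to occupy the same space */
body {
  font-family: Georgia, serif;
  font-size: 16px;
  line-height: 1.6;
  letter-spacing: 0.05px;
  word-spacing: 0.35px;
}

/* once it arrives, swap in the webfont and relax the overrides */
.fonts-loaded body {
  font-family: "Merriweather", Georgia, serif;
  letter-spacing: normal;
  word-spacing: normal;
}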



How do you start a sentence with “npm”?

Sat, 08/12/2017 - 14:21

Asking this question was a fun little journey.

Right on the npm website, the very first sentence starts with "npm", and they do not capitalize it.

That's a pretty good precedent for not capitalizing it. It certainly looks awkward though, which is why I asked the question to begin with. It doesn't feel right to me to start a sentence that way, and I'm sure some other people would look at it and see a mistake.

Their own documentation forbids capitalization as well:

Straight from Raquel Vélez, an employee:

always npm, even if starting the sentence.

(this is a common question we get a lot :-) )

— Raquel Vélez (@rockbot) August 11, 2017

But!

We don't have to.

Brand name capitalization is always at the discretion of the editor or style guide, so brands like WIRED or LEGO do not have to be capped

— Karen McGrane (@karenmcgrane) August 11, 2017

If you're following the Chicago Manual of Style, they would say:

This makes life difficult, however, for those of us who cannot bear to begin a sentence with a lowercase letter. CMOS forbids so doing (except for names like eBay)—we advise you to rewrite. Some publications simply ignore the preference.

Emphasis mine.

"Rewriting", as in, find a way not to start the sentence with the preferred-lowercase initialism.

Using npm …
The npm package manager …
Thanks to npm …
Anything to keep it capitalized as intended while not breaking basic capitalization 😅

— Nicolás Bevacqua (@nzgb) August 11, 2017

This advice holds true for other situations/companies as well:

avoid: He said that EBay is where he bought his IPod.
instead, use: He said that eBay is where he bought his iPod.

avoid: eBay is where he bought his iPod.
instead, use: He bought his iPod on eBay.

Or just burn it all down

this is why i don't capitalize ever

— jennmoneydollars (@jennschiffer) August 11, 2017


More CSS Charts, with Grid & Custom Properties

Fri, 08/11/2017 - 17:54

I loved Robin's recent post, experimenting with CSS Grid for bar-charts. I've actually been using a similar approach on a client project, building a day-planner with CSS Grid. It's a different use-case, but the same basic technique: using grid layouts to visualize data.

(I recommend reading Robin's article first, since I'm building on top of his chart.)

Robin's approach relies on a large Sass loop to generate 100 potential class-names, even though less than 12 are used in the final chart. In production we'll want something more direct and performant, with better semantics, so I turned to definition lists and CSS Variables (aka Custom Properties) to build my charts.

Here's the final result:

See the Pen Bar chart in CSS grid + variables by Miriam Suzanne (@mirisuzanne) on CodePen.

Let's dig into it!

Markup First

Robin was proposing a conceptual experiment, so he left out many real-life data and accessibility concerns. Since I'm aiming for (fictional) production code, I want to make sure it will be semantic and accessible. I borrowed the year-axis idea from a comment on Robin's charts, and moved everything into a definition list. Each year is associated with a corresponding percentage in the list:

<dl class="chart">
  <dt class="date">2000</dt>
  <dd class="bar">45%</dd>

  <dt class="date">2001</dt>
  <dd class="bar">100%</dd>

  <!-- etc… -->
</dl>

There are likely other ways to mark this up accessibly, but a dl seemed clean and clear to me – with all the data and associated pairs available as structured text. By default, this displays year/percentage pairs in a readable format. Now we have to make it beautiful.

Grid Setup

I started from Robin's grid, but my markup requires an extra row for the .date elements. I add that to the end of my grid-template-rows, and place my date/bar elements appropriately:

.chart {
  display: grid;
  grid-auto-columns: 1fr;
  grid-template-rows: repeat(100, 1fr) 1.4rem;
  grid-column-gap: 5px;
}

.date {
  /* fill the bottom row */
  grid-row-start: -2;
}

.bar {
  /* end before the bottom row */
  grid-row-end: -2;
}

Normally, I would use auto for that final row, but I needed an explicit height to make the background-grid work properly. Not worth the trade-off, probably, but I was having fun.

Passing Data to CSS

At this point, CSS has no access to the relevant numbers for styling a chart. We have no hooks for setting individual bars to different heights. Robin's solution involves individual class-names for every bar-value, with a Sass loop to create custom classes for each value. That works, but we end up with a long list of classes we may never use. Is there a way to pass data into CSS more directly?

The most direct approach might be an inline style:

<dd class="bar" style="grid-row-start: 56">45%</dd>

The start position is the full number of grid lines (one more than the number of rows, or 101 in this case), minus the total value of the given bar: 101 - 45 = 56. That works fine, but now our markup and CSS are tightly coupled. With CSS Variables, we can pass in raw data, and let the CSS decide how it is used:

<dd class="bar" style="--start: 56">45%</dd>

In the CSS we can wire that up to grid-row-start:

.bar { grid-row-start: var(--start); }

We've replaced the class-name loop, and bloated 100-class output, with a single line of dynamic CSS. Variables also remove the danger normally associated with CSS-in-HTML. While an inline property like grid-row-start will be nearly impossible to override from a CSS file, the inline variable can simply be ignored by CSS. There are no specificity/cascade issues to worry about.

Data-Driven Backgrounds

As a bonus, we can do more with the data than simply provide a grid-position – reusing it to style a fallback option, or even adjust the bar colors based on that same data:

.bar {
  background-image: linear-gradient(to right, green, yellow, orange, red);
  background-size: 1600% 100%;

  /* turn the start value into a percentage for position on the gradient */
  background-position: calc(var(--start) * 1%) 0;
}

I started with a horizontal background gradient from green to yellow, orange, and then red. Then I used background-size to make the gradient much wider than the bar – at least 200% per color (800%). Larger gradient-widths will make the fade less visible, so I went with 1600% to keep it subtle. Finally, using calc() to convert our start position (1-100) into a percentage, I can adjust the background position left-or-right based on the value – showing a different color depending on the percentage.

The background grid is also generated using variables and background-gradients. Sadly, subpixel rounding makes it a bit unreliable, but you can play with the --line-every value to change the level of detail. Take a look around, and see what other improvements you can make!
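For reference, here's a rough approximation of how a grid like that can be drawn. This is my own sketch rather than the demo's exact code, and it ignores the bottom date row for simplicity:

.chart {
  /* draw a horizontal guide line every N value-units (10 = every 10%) */
  --line-every: 10;

  /* each repeating tile is N% tall, with a 1px line at its top edge */
  background-image: linear-gradient(to bottom, #ccc 1px, transparent 1px);
  background-size: 100% calc(1% * var(--line-every));
}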

Adding Scale [without Firefox]

Right now, we're passing in a start position rather than a pure value ("56" for "45%"). That start position is based on an assumption that the overall scale is 100%. In order to make this a more flexible tool, I thought it would be fun to contain all the math, including the scale, inside CSS. Here's what it would look like:

<dl class="chart" style="--scale: 100">
  <dt class="date">2000</dt>
  <dd class="bar" style="--value: 45">45%</dd>

  <dt class="date">2001</dt>
  <dd class="bar" style="--value: 100">100%</dd>

  <!-- etc… -->
</dl>

Then we can calculate the --start value in CSS, before applying it.

.bar {
  --start: calc(var(--scale) + 1 - var(--value));
  grid-row-start: var(--start);
}

With both the overall scale and individual values in CSS, we can manipulate either one individually. Change the scale to 200%, and watch the chart update accordingly:

See the Pen Bar Chart with Scale - no firefox by Miriam Suzanne (@mirisuzanne) on CodePen.
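The scale doesn't have to live in the markup, either. Since it's just a custom property, a stylesheet can set or override it; here's a small sketch of my own, with a hypothetical class name:

/* the same 0-100 data, drawn against a 0-200 scale */
.chart--zoomed-out {
  --scale: 200;
}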

Both Chrome and Safari handle it beautifully, but Firefox seems unhappy about calc values in grid-positioning. I imagine they'll get it fixed eventually. For now, we'll just have to leave some calculations out of our CSS.

Sad, but we'll get used to it. 😉

There is much more we could do, like providing fallbacks for older browsers (one rough sketch of that follows below), but I do think this is a viable option with potential to be accessible, semantic, performant, and beautiful. Thanks for starting that conversation, Robin!
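For instance, a fallback sketch under my own assumptions (not part of the demo): leave the plain definition-list layout in place for older browsers, and tuck everything chart-related behind a feature query.

/* older browsers keep the readable dl layout from above */

/* grid-and-variable styles only apply where both are understood */
@supports (display: grid) and (--custom: properties) {
  .chart {
    display: grid;
    /* ...the rest of the chart styles... */
  }
}

Browsers that fail the query still render every year/percentage pair as readable text.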

More CSS Charts, with Grid & Custom Properties is a post from CSS-Tricks

CSS Utility Classes and “Separation of Concerns”

Fri, 08/11/2017 - 14:31

Adam Wathan takes us on a journey through the different ways we can approach HTML and CSS. This is a really great read that I bet will resonate with a lot of you, whether or not you agree with where he ends up.

Here's a particularly interesting bit where he specifically calls out "separation of concerns" as being a straw man:

You either have separation of concerns (good!), or you don't (bad!). This is not the right way to think about HTML and CSS.

Instead, think about dependency direction. There are two ways you can write HTML and CSS:

CSS that depends on HTML ... In this model, your HTML is restyleable, but your CSS is not reusable.

HTML that depends on CSS ... In this model, your CSS is reusable, but your HTML is not restyleable.

It occurs to me that there are fairly large contingents heading in both directions with styling. One direction is headed toward tightly coupled CSS (i.e. `.vue` files with scoped styles living right next to the template HTML). The other direction is styling classes that are completely de-coupled from HTML (i.e. atomic CSS).

What seems to be least popular is loosely-coupled global styles.

Direct Link to Article

CSS Utility Classes and “Separation of Concerns” is a post from CSS-Tricks

Improving Conversations using the Perspective API

Fri, 08/11/2017 - 13:30

I recently came across an article by Rory Cellan-Jones about a new technology from Jigsaw, a development group at Google focused on making people safer online through technology. At the time they'd just released the first alpha version of what they call The Perspective API. It's a machine learning tool designed to rate a string of text (e.g. a comment) and provide you with a Toxicity Score, a number representing how toxic the text is.

The system learns by seeing how thousands of online conversations have been moderated and then scores new comments by assessing how "toxic" they are and whether similar language had led other people to leave conversations. What it's doing is trying to improve the quality of debate and make sure people aren't put off from joining in.

As the project is still in its infancy it doesn't do much more than that. Still, we can use it!

Starting with the API

To get started with using the API, you'll need to request API access from their website. I managed to get access within a few days. If you're interested in playing with this yourself, know that you might need to wait it out until they email you back. Once you get the email saying you have access, you'll need to log in to the Google Developer Console and get your API key. Create your credentials with the amount of security you'd like and then you're ready to get going!

Now you'll need to head over to the documentation on GitHub to learn a bit more about the project and find out how it actually works. The documentation includes lots of information about what features are currently available and what they're ultimately designed to achieve. Remember: the main point of the API is to provide a score of how toxic a comment is, so to do anything extra with that information will require some work.

Getting a Score with cURL

Let's use PHP's cURL functions to make the request and get the score. If you're not used to cURL, don't panic; it's relatively simple to get the hang of. If you want to try it within WordPress, it's even easier, because there are native WordPress helper functions you can use. Let's start with the standard PHP method.

Whilst we walk through this, it's a good idea to have the PHP documentation open to refer to. To understand the fundamentals of cURL, we'll go through a couple of the core options we may need to use.

// build the request body in the shape the API expects
$params = array(
  'comment' => array(
    'text' => 'what a stupid question...'
  ),
  'languages' => array('en'),
  'requestedAttributes' => array(
    'TOXICITY' => new stdClass() // an empty object, not an empty string
  )
);
$params = json_encode($params);

$req = curl_init();
curl_setopt($req, CURLOPT_URL, 'https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=YOUR-API-KEY');
curl_setopt($req, CURLOPT_POST, true);
curl_setopt($req, CURLOPT_POSTFIELDS, $params);
curl_setopt($req, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
curl_setopt($req, CURLOPT_RETURNTRANSFER, true); // return the response instead of printing it
$response = curl_exec($req);
curl_close($req);

These lines perform the different actions required to make a cURL request to a server: you initialize the cURL request, set the options for the request, execute it, then close the connection. You'll then get your comment data back from the server in the form of JSON, which is handy for a number of reasons.
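Since the response comes back as JSON, decoding it into a PHP array takes one more step. A minimal sketch, assuming the $response captured above and the response shape shown later in this post:

// decode the JSON body into an associative array
$data = json_decode($response, true);

// drill down to the toxicity score, a float between 0 and 1
$score = $data['attributeScores']['TOXICITY']['summaryScore']['value'];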

Send An Ajax Request

Since you get the response from the API in JSON format, you can also make an Ajax request to the API instead. This is handy if you don't want to dive too much into PHP and cURL requests. An example of an Ajax request (using jQuery) would look something like the following:

$.ajax({
  type: 'post',
  url: 'https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=YOUR-API-KEY',
  // the API expects a JSON body, not form-encoded data
  contentType: 'application/json',
  data: JSON.stringify({
    comment: {
      text: "this is such a stupid idea!!"
    },
    languages: ["en"],
    requestedAttributes: {
      TOXICITY: {}
    }
  }),
  success: function(response) {
    console.log(response);
  }
});

The data we get back is now logged to the console, ready for us to debug. From there we can decode the JSON data into an array and do something with it. Make sure you include your API key at the end of the URL in the Ajax request too, otherwise it won't work! Without it, you'll get an error about your authentication being invalid. Also, you don't have to stop here. You could take the example above a step further and log the score in a database as soon as you've got the data back, or provide feedback to the user on the front end in the form of an alert.

The WordPress Way

If you're using WordPress (which is relevant here since WordPress has comment threads you might want to moderate) and you want to make a cURL request to the Perspective API, then it's even simpler. Using the Toxic Comments plugin as an example, you can do the following instead thanks to WordPress' exhaustive built-in functions. You won't need to do any of the following if you use the plugin, but it's worth explaining what the plugin does behind the scenes to achieve what we want to do here.

$request = wp_remote_post($url, $arguments);

This will make a POST request to the external resource for us without much legwork. There are other functions you can use too, like wp_remote_get() for GET requests, but we don't need that right now. You then need to use another function to get the requested data back from the server. Yes, you're completely right. WordPress has a function for that:

$data = wp_remote_retrieve_body($request);
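Putting those two functions together, the whole round trip might look something like this sketch; $comment_text is a placeholder of mine, and the endpoint URL (with your API key) is covered next:

$url = 'https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=YOUR-API-KEY';

$arguments = array(
  'headers' => array('Content-Type' => 'application/json'),
  'body' => json_encode(array(
    'comment' => array('text' => $comment_text),
    'languages' => array('en'),
    'requestedAttributes' => array('TOXICITY' => new stdClass())
  ))
);

$request = wp_remote_post($url, $arguments);
$data = json_decode(wp_remote_retrieve_body($request), true);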

So that's great, but how do we actually use the API to get the data we want? Well, to start with, if you just want the overall toxicity score, you'll need to use the following URL, which asks the API to read the comment and score it. It also has your API key at the end, which you need to authenticate your request. Make sure you change it to yours!

https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=YOUR-API-KEY

It looks quite plain, and if you visit it in the browser, you'll land on a 404 page. But if you make a cURL request to it, either through your favorite CMS or via a simple PHP script, you'll get back data that might look similar to this:

{ "attributeScores": { "TOXICITY": { "summaryScore": { "value": 0.567890, "type": "PROBABILITY" } } }, "languages": [ "en" ] }

The score you'll get back from the API is a decimal between 0 and 1. So if a comment gets a score of 50% toxicity, the value you'll actually get back from the API will be 0.5. You can then use this score to change the way the comment is stored and shown to the end user, by marking it as spam or creating a filter to let users show less or more toxic comments, much like Google has done in their example.
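As a rough sketch of that idea in WordPress terms (the 0.8 threshold and the $comment_id variable are my own assumptions):

// drill into the decoded response, as before
$score = $data['attributeScores']['TOXICITY']['summaryScore']['value'];

// anything above the threshold gets marked as spam
if ($score > 0.8) {
  wp_set_comment_status($comment_id, 'spam');
}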

There are other bits of useful data you may want to look into as well, such as the context of the comment, which can help you understand its intent without reading it firsthand.

Ultimately, with the kind of data we can expect to receive, it becomes possible to filter out comments with a particular intent and provide a nicer comment area in places where trolls often take over. Over time, as the API becomes more developed, we should expect the scoring to become more robust and the analysis of the comments we send it more accurate.

Privacy and Censorship

This is a pretty hot topic these days. I can imagine some pushback on this, particularly because it involves sending your data to Google to be analyzed and judged by Google's computers, which ultimately does have an effect on your voice and your ability to use it. Personally, I think the idea behind this is great and it works very well in practice. But when you think about its implementation on popular news websites and social media platforms, you can see how privacy and censorship could become concerns.

The Perspective API makes a great effort to score comments based on a highly complex algorithm, but it seems that there is still a long way to go yet in the fight to maintain more civil social spaces online.

Until then, play around with the API and let me know what you think! If you're not up for writing something from scratch, there are some public client libraries available now in both Node and Python, so go for it! Also, remember to err on the side of caution, as the API is still in an alpha phase for now, so things may break. If you're feeling lazy, check out the quick start guide.

Improving Conversations using the Perspective API is a post from CSS-Tricks

“Combine the transparency of a PNG with the compression of a JPG”

Thu, 08/10/2017 - 15:42

JPG doesn't support alpha transparency. PNGs that do support alpha transparency don't compress nearly as well as JPG. SVG has masks and clipping paths, which we can use to our advantage here.

Direct Link to Article

“Combine the transparency of a PNG with the compression of a JPG” is a post from CSS-Tricks
