Subscribe to CSS-Tricks feed
Tips, Tricks, and Techniques on using Cascading Style Sheets.


Hexatope

Fri, 10/20/2017 - 20:41

It was awesome to hear Charlotte Dann on CodePen Radio the other day, who is Kickstarting a new jewelry business. The idea is that you draw your own jewelry (everything you draw looks awesome because it's on this interesting hexagon grid) and then it actually gets made. This tying together of her passions sprang to life on CodePen.


Hexatope is a post from CSS-Tricks

Breaking down CSS Box Shadow vs. Drop Shadow

Fri, 10/20/2017 - 15:57

Drop shadows. Web designers have loved them for a long time to the extent that we used to fake them with PNG images before CSS Level 3 formally introduced them to the spec as the box-shadow property. I still reach for drop shadows often in my work because they add a nice texture in some contexts, like working with largely flat designs.

Not too long after box-shadow was introduced, a working draft for CSS Filters surfaced and, with it, a method for drop-shadow() that looks a lot like box-shadow at first glance. However, the two are different and it's worth comparing those differences.

For me, the primary difference came to light early on when I started working with box-shadow. Here's a simple triangle not unlike the one I made back then.

See the Pen CSS Caret by CSS-Tricks (@css-tricks) on CodePen.

Let's use this to break down the difference between the two.

Box Shadow

Add a box-shadow on that bad boy and this happens.

See the Pen CSS Caret Box Shadow by CSS-Tricks (@css-tricks) on CodePen.

It's annoying, but makes sense. CSS uses a box model, where the element's edges are bound in the shape of a rectangle. Even in cases where the shape of the element does not appear to be a box, the box is still there, and that is what box-shadow is applied to. This was my "ah-ha moment" for understanding the box in box-shadow.
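That "ah-ha" is easy to reproduce. Here's a minimal sketch (the class names are mine, not from the demo): a classic border-trick triangle with a box-shadow applied. Even though only the triangle is visible, the shadow traces the element's invisible rectangular box.

```css
/* A classic CSS triangle: a zero-size element whose visible
   shape is drawn entirely by its borders. */
.caret {
  width: 0;
  height: 0;
  border-left: 20px solid transparent;
  border-right: 20px solid transparent;
  border-bottom: 20px solid #333;
}

/* The shadow follows the element's rectangular border box,
   not the triangle we actually see. */
.caret--box-shadow {
  box-shadow: 0 4px 8px rgba(0, 0, 0, 0.5);
}
```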

CSS Filter Drop Shadow

CSS Filters are pretty awesome. Take a gander at all the possibilities for adding visual filters on elements and marvel at how CSS suddenly starts doing a lot of things we used to have to mock up in Photoshop.

Filters are not bound to the box model. That means the outline of our triangle is recognized and the transparency around it is ignored so that the intended shape receives the shadow.
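A sketch of the filter version (the class name is invented): the shadow hugs the triangle's outline rather than its bounding box.

```css
/* drop-shadow() ignores the box model and shadows the
   rendered shape, skipping the transparent areas. */
.caret--drop-shadow {
  filter: drop-shadow(0 4px 8px rgba(0, 0, 0, 0.5));
}
```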

See the Pen CSS Caret Drop Shadow by CSS-Tricks (@css-tricks) on CodePen.

Deciding Which Method to Use

The answer is totally up to you. The simple example of a triangle above might make it seem that filter: drop-shadow() is better, but it's not a fair comparison of the benefits or even the possibilities of both methods. It's merely an illustration of their different behaviors in a specific context.

Like most things in development, the answer of which method to use depends. Here's a side-by-side comparison to help distinguish the two and when it might be best to choose one over the other.

  • Specification: box-shadow lives in the CSS Backgrounds and Borders Module Level 3; drop-shadow() in the Filter Effects Module Level 1.
  • Browser support: great for box-shadow; good for drop-shadow().
  • Spread radius: box-shadow supports it as an optional fourth value; drop-shadow() does not.
  • Blur radius: box-shadow's calculation is based on a pixel length; drop-shadow()'s is based on the stdDeviation attribute of the underlying SVG filter.
  • Inset shadows: box-shadow supports them; drop-shadow() does not.
  • Performance: box-shadow is not hardware accelerated; drop-shadow() is hardware accelerated in browsers that support it, and a heavy lift without it.

Wrapping Up

The difference between box-shadow and filter: drop-shadow() really boils down to the CSS box model. One sees it and the other disregards it. There are other differences that distinguish the two in terms of browser support, performance and such, but the way the two treat the box model is the key difference.

Update: Amelia identified another key difference in the comments where the spread of the radius for drop-shadow() is calculated differently than box-shadow and even that of text-shadow. That means that the spread radius you might specify in box-shadow is not one-to-one with the default spread value for drop-shadow, so the two are not equal replacements of one another in some cases.
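In practice, that means you can't copy values from one syntax straight into the other and expect a matching render. A rough illustration (treat the numbers as placeholders; the exact visual difference varies):

```css
/* These two are NOT visually equivalent, even though the
   lengths look similar. box-shadow's blur is a plain pixel
   length (plus an optional spread), while drop-shadow()'s
   blur maps to the stdDeviation of an SVG Gaussian blur. */
.card-box-shadow {
  box-shadow: 0 4px 10px 2px rgba(0, 0, 0, 0.4); /* x y blur spread color */
}
.card-drop-shadow {
  filter: drop-shadow(0 4px 10px rgba(0, 0, 0, 0.4)); /* no spread value */
}
```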

Let's cap this off with a few other great examples illustrating that. Lennart Schoors also has a nice write-up with practical examples using tooltips and icons that we previously called out.

See the Pen Drop-shadow vs box-shadow (2) by Kseso (@Kseso) on CodePen.

See the Pen box-shadow & drop-shadow by qnlz (@qnlz) on CodePen.

See the Pen Drop-shadow vs box-shadow (3) en png´s by Kseso (@Kseso) on CodePen.


MDN Product Advisory Board

Thu, 10/19/2017 - 15:09

We all know and love MDN for already being the best documentation for web features out there. It looks like it's poised to get even better with Google and Microsoft both joining a new board.

Mozilla's vision for the MDN Product Advisory Board is to build collaboration that helps the MDN community collectively maintain MDN as the most comprehensive, complete, and trusted reference documenting the most important aspects of modern browsers and web standards.

Interestingly, none of them mentioned WebPlatform, the previous attempt at this that kinda fizzled out. This effort seems a little more likely to succeed as it already has a successful foundation, actual staff, and a benevolent dictator in Mozilla. It's great to see browsers compete on user features but cooperate on standards and education.

Worth a shout that we dabble in "docs" for CSS features ourselves here at CSS-Tricks with the Almanac, but if anything in there is worth taking for a unified resource like MDN, be our guest. Not to mention everything public on CodePen is MIT, and there are loads of Pens that demonstrate web features wonderfully. For instance, that's why I built this one.



5 Tips for Starting a Front-End Refactor

Thu, 10/19/2017 - 10:20

For the last two weeks, I've been working on a really large refactor project at Gusto, and I realize that this is the first time that a project like this has gone smoothly for me. There haven't been any kinks in the process, it took about as much time as I thought it would, and no one appears to be mad at me. In fact, things have gone almost suspiciously well. So how did this happen, and what was the problem we were solving in the first place?

Well, we had a problem with how our CSS was organized. Some pages in our app loaded Bootstrap and some didn't. Others were loading only our app styles and some weren't loading the styles we served from our component library, a separate repo that includes all our forms, buttons, and variables, etc. This led to all sorts of design inconsistencies but most important of all it wasn't clear how to write CSS in our app. Do the component library styles override Bootstrap? Does Bootstrap override the app styles? It was scary stuff.

The goal was a rather complex one, then. First, I needed to figure out how our styles were loaded in the app. Second, I wanted to remove Bootstrap from our node_modules and make a new Sass file with all those styles. Third, I had to remove all our Bootstrap styles and move them into the component library, where we could refactor every line of Bootstrap into each individual component (so all the styles for the Tabs.jsx component would live only in the Tabs.scss file). All of this work should reduce the amount of CSS we write by thousands of lines and generally make our codebase more readable and much more developer-friendly for the future.

However, before I started I knew that everything would have to be extraordinarily organized because this would involve a big shakeup in terms of our codebase. There will be spreadsheets! There will be a single, neat and tidy pull request! Lo, and behold! There will be beautiful, glorious documentation for everyone to read.

So here are some tips on making sure that big refactor projects go smoothly, based on my experience working on this large and complex codebase. Let's begin!

Tip #1: Gather as much data as you can

For this project, I needed to know a bunch of stuff beforehand, namely in the form of data. This data would then serve as metrics for success because if I could show that we could safely remove 90% of the CSS in the app then that's a huge win that the rest of the team can celebrate.

My favorite tool for refactoring CSS these days is Chrome's Coverage tab in DevTools and that's because it can show you just how much CSS is applied to any given page. Take a look here:

And it showed me everything I needed: the number of CSS files we generated, the overall size of those files and how much CSS we can safely delete from that page.

Tip #2: Make a list of everything you have to do

The very first refactor project I worked on at Gusto was a complete nightmare because I jumped straight into the codebase and started blowing stuff up. I'd remove a class here, an element there, and soon enough I found myself having changed thousands of lines of CSS and breaking hundreds of automated tests. Of course, all of this was entirely my fault and it caused a bunch of folks to get mad at me, and rightly so!

This was because I hadn't written down a list of everything that I wanted to do and the order I needed to do it in. Once you do this you can begin to understand just how big the scope of the project really is.

Tip #3: Keep focused

The second reason I made such a huge mistake on my first refactor project was that I wasn't focused on a very specific task. I'd just change things depending on how I felt, which isn't any way to manage a project.

But once you've made that list of tasks you can then break it down even further into each individual pull request that you'll have to make. You might already be doing this, but I would thoroughly recommend keeping each commit as focused and as small as you can. You'll need to be patient, a quality I certainly lack, and determined. Slow, small steps during a refactoring project are always better than a single large, unfocused pull request with dozens of unrelated commits in it.

If you happen to notice a new issue with the CSS or with the design as you're refactoring then don't rush to fix it, trust me. It'll be a distraction. Instead, focus on the task at hand. You can always return to that other thing later.

Tip #4: Tell everyone you're working on this project

This might just be something that I struggle with personally but I never realized until recently just how much of front-end development is a community effort. The real difficulty of the work doesn't depend on whether you know the fanciest CSS technique, but rather how willing you are to communicate with the rest of your team.

Tell everyone you're working on this project to make sure that there isn't anything you overlooked. Refactoring large codebases can lead to edge cases that can then lead to overwhelmingly painful customer-facing problems. In our case, if I messed up the CSS then potentially thousands of people wouldn't be paid that week by our app. Every little bit of information can only help you make this refactor process as efficient and as smooth as possible.

Tip #5: Document as much as you can

I really wish I could find the precise quote, but somewhere deep in Ellen Ullman's excellent book Life in Code she writes about how programming is sort of like a bad translation of a book. Outside the codebase we have ideas, thoughts, knowledge. And when we convert those ideas into code something inexplicable is lost forever, regardless of how good you are at programming.


One small tip that helps that translation process is writing really detailed messages in the pull request itself. Why are you doing this? How are you going about it? How do you plan to test that your refactor didn't break everything? That's precisely the sort of information that will help someone in the future learn more about your original intent and what your goals were.

Wrapping up

I learnt all this stuff the really hard, long, stupid way. But if you happen to follow these tips, then large front-end projects are bound to go a whole lot smoother, both for you and your team. I guarantee it.


Sponsor: Media Temple

Thu, 10/19/2017 - 10:03

(This is a sponsored post.)

Media Temple is my web host here at CSS-Tricks. I still remember what it was like buying my first web hosting, pointing a domain name to it, FTPing into that server, and having the files I put there appear in the web browser. Powerful stuff, kids. Watch out or you might try to turn it into a career!

I've upgraded my server a few times since then, but it's still a pretty standard grade Media Temple server that happily hosts this site with little trouble. When there is anything weird, I use the same support system to get help as is available to anybody else, and they always go out of their way to investigate what's going on and explain what's up.



A Look Back at the History of CSS

Wed, 10/18/2017 - 17:00

When you think of HTML and CSS, you probably imagine them as a package deal. But for years after Tim Berners-Lee first created the World Wide Web in 1989, there was no such thing as CSS. The original plan for the web offered no way to style a website at all.

There's a now-infamous post buried in the archives of the WWW mailing list. It was written by Marc Andreessen in 1994, who would go on to co-create both the Mosaic and Netscape browsers. In the post, Andreessen remarked that because there was no way to style a website with HTML, the only thing he could tell web developers when asked about visual design was, "sorry, you're screwed."

10 years later, CSS was on its way to full adoption by a newly enthused web community. What happened along the way?

Finding a Styling Language

There were plenty of ideas for how the web could theoretically be laid out. However, it just was not a priority for Berners-Lee because his employers at CERN were mostly interested in the web as a digital directory of employees. Instead, we got a few competing languages for web page layout from developers across the community, most notably from Pei-Yuan Wei, Andreessen, and Håkon Wium Lie.

Take Pei-Yuan Wei, who created the graphical ViolaWWW Browser in 1991. He incorporated his own stylesheet language right into his browser, with the eventual goal of turning this language into an official standard for the web. It never quite got there, but it did provide some much-needed inspiration for other potential specifications.

ViolaWWW upon release

In the meantime, Andreessen had taken a different approach in his own browser, Netscape Navigator. Instead of creating a decoupled language devoted to a website's style, he simply extended HTML to include presentational, unstandardized HTML tags that could be used to design web pages. Unfortunately, it wasn't long before web pages lost all semantic value and looked like this:

<MULTICOL COLS="3" GUTTER="25">
  <P><FONT SIZE="4" COLOR="RED">This would be some font broken up into columns</FONT></P>
</MULTICOL>

Programmers were quick to realize that this kind of approach wouldn't last long. There were plenty of ideas for alternatives, like RRP, a stylesheet language that favored abbreviation and brevity, or PSL96, a language that actually allowed for functions and conditional statements. If you're interested in what these languages looked like, Zach Bloom wrote an excellent deep dive into several competing proposals.

But the idea that grabbed everyone's attention was first proposed by Håkon Wium Lie in October of 1994. It was called Cascading Style Sheets, or just CSS.

Why We Use CSS

CSS stood out because it was simple, especially compared to some of its earliest competitors.

window.margin.left = 2cm
font.family = times
h1.font.size = 24pt 30%

CSS is a declarative programming language. When we write CSS, we don't tell the browser exactly how to render a page. Instead, we describe the rules for our HTML document one by one and let browsers handle the rendering. Keep in mind that the web was mostly being built by amateur programmers and ambitious hobbyists. CSS followed a predictable and, perhaps more importantly, forgiving format, and just about anyone could pick it up. That's a feature, not a bug.

CSS was, however, unique in a singular way. It allowed for styles to cascade. It's right there in the name. Cascading Style Sheets. The cascade means that styles can inherit and overwrite other styles that had previously been declared, following a fairly complicated hierarchy known as specificity. The breakthrough, though, was that it allowed for multiple stylesheets on the same page.
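A quick sketch of that idea (the selectors are invented for illustration): both rules match the same element, and the cascade decides which declaration wins based on specificity rather than source order alone.

```css
/* Both rules match <h1 class="title">. */
h1 {
  color: black;
}

/* h1.title carries more specificity than the bare h1 type
   selector, so this color wins. */
h1.title {
  color: rebeccapurple;
}
```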

See that percentage value above? That's actually a pretty important bit. Lie believed that both users and designers would define styles in separate stylesheets. The browser, then, could act as a sort of mediator between the two, and negotiate the differences to render a page. That percentage represents just how much ownership this stylesheet is taking for a property. The less ownership, the more likely it was to be overwritten by users. When Lie first demoed CSS, he even showed off a slider that allowed him to toggle between user-defined and developer-defined styles in the browser.

This was actually a pretty big debate in the early days of CSS. Some believed that developers should have complete control. Others that the user should be in control. In the end, the percentages were removed in favor of more clearly defined rules about which CSS definitions would overwrite others. Anyway, that's why we have specificity.

Shortly after Lie published his original proposal, he found a partner in Bert Bos. Bos had created the Argo browser and, in the process, his own stylesheet language, pieces of which eventually made their way into CSS. The two began working out a more detailed specification, eventually turning to the newly created HTML working group at the W3C for help.

It took a few years, but by the end of 1996, the above example had changed.

html {
  margin-left: 2cm;
  font-family: "Times", serif;
}

h1 {
  font-size: 24px;
}

CSS had arrived.

The Trouble with Browsers

While CSS was still just a draft, Netscape had pressed on with presentational HTML elements like multicol, layer, and the dreaded blink tag. Internet Explorer, on the other hand, had taken to incorporating some of CSS piecemeal. But their support was spotty and, at times, incorrect. Which means that by the early aughts, after five years of CSS as an official recommendation, there were still no browsers with full CSS support.

That came from kind of a strange place.

When Tantek Çelik joined Internet Explorer for Macintosh in 1997, his team was pretty small. A year later, he was made the lead developer of the rendering engine at the same time his team was cut in half. Most of the focus at Microsoft (for obvious reasons) was on the Windows version of Internet Explorer, and the Macintosh team was mostly left to its own devices. So, starting with the development of version 5 in 2000, Çelik and his team decided to put their focus where no one else was: CSS support.

It would take the team two years to finish version 5. During this time, Çelik spoke frequently with members of the W3C and web designers using their browser. As each piece slid into place, the IE-for-Mac team verified on all fronts that they were getting things just right. Finally, in March of 2002, they shipped Internet Explorer 5 for Macintosh. The first browser with full CSS Level 1 support.

Doctype Switching

But remember, the Windows version of Internet Explorer had added CSS to their browser with more than a few bugs and a screwy box model, which describes the way an element's dimensions are calculated and then rendered. Internet Explorer included attributes like margin and padding inside the total width and height of an element, but IE5 for Mac and the official CSS specification called for these values to be added to the width and height. If you've ever played around with box-sizing, you know exactly the difference.
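You can see the two competing models side by side with box-sizing (a sketch, not from the original article):

```css
/* The CSS specification's model (and IE5/Mac's): padding and
   border are added OUTSIDE the declared width.
   Rendered width: 200px + (2 * 20px) + (2 * 2px) = 244px. */
.spec-box {
  box-sizing: content-box; /* the default */
  width: 200px;
  padding: 20px;
  border: 2px solid;
}

/* Old IE/Windows behavior: padding and border are counted
   INSIDE the declared width. Rendered width: 200px. */
.old-ie-box {
  box-sizing: border-box;
  width: 200px;
  padding: 20px;
  border: 2px solid;
}
```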

Çelik knew that in order to make CSS work, these differences would need to be reconciled. His solution came after a conversation with standards-advocate Todd Fahrner. It's called doctype switching, and it works like this.

We all know doctypes. They go at the very top of our HTML documents.

<!DOCTYPE html>

But in the old days, they looked like this:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN" "http://www.w3.org/TR/REC-html40/strict.dtd">

That's an example of a standards-compliant doctype. The -//W3C//DTD HTML 4.0//EN is the crucial bit. When a web designer added this to their page, the browser would know to render the page in "standards mode," and CSS would match the specification. If the doctype was missing, or an out-of-date one was in use, the browser would switch to "quirks mode" and render things according to the old box model, with old bugs intact. Some designers even intentionally opted into quirks mode in order to get back the older (and incorrect) box model.

Eric Meyer, sometimes referred to as the godfather of CSS, has gone on record and said doctype switching saved CSS. He's probably right. We would still be using browsers packed with old CSS bugs if it weren't for that one, simple trick.

The Box Model Hack

There was one last thing to figure out. Doctype switching worked fine in modern browsers on older websites, but the box model was still unreliable in older browsers (particularly Internet Explorer) for newer websites. Enter the Box Model Hack, a clever trick from Çelik that took advantage of a little-used CSS property called voice-family to trick browsers and allow for multiple widths and heights in the same class. Çelik instructed authors to put their old box model width first, then close the rule in a small hack with voice-family, followed by their new box model width. Sort of like this:

div.content {
  width: 400px;
  voice-family: "\"}\"";
  voice-family: inherit;
  width: 300px;
}

Voice-family was not recognized in older browsers, but it did accept a string as its value. So, by adding an extra closing brace inside that string, older browsers would simply close the CSS rule before ever getting to the second width. It was simple and effective, and it let a lot of designers start experimenting with new standards quickly.

The Pioneers of Standards-Based Design

Internet Explorer 6 was released in 2001. It would eventually become a major thorn in the side of web developers everywhere, but it actually shipped with some pretty impressive CSS and standards support. Not to mention its market share hovering around 80%.

The stage was set, the pieces were in place. CSS was ready for production. Now people just needed to use it.

In the 10 years that the web hurtled towards ubiquity without a coherent or standard styling language, it's not like designers had simply stopped designing. Not at all. Instead, they relied on a backlog of browser hacks, table-based layouts, and embedded Flash files to add some style when HTML couldn't. Standards-compliant, CSS-based design was new territory. What the web needed was some pioneers to hack a path forward.

What they got was two major redesigns just a few months apart. The first from Wired followed soon after by ESPN.

Douglas Bowman was in charge of the web design team for Wired magazine. In 2002, Bowman and his team looked around and saw that no major sites were using CSS in their designs. Bowman felt almost an obligation to the web community, which looked to Wired for examples of best practices, to redesign the site using the latest standards-compliant HTML and CSS. He pushed his team to tear everything down and redesign it from scratch. In September of 2002, they pulled it off and launched their redesign. The site even validated.

ESPN released their site just a few months later, using many of the same techniques on an even larger scale. These sites took a major bet on CSS, a technology that some thought might not even last. But it paid off in a major way. If you pulled aside any of the developers that worked on these redesigns, they would give you a laundry list of major benefits. More performant, faster design changes, easier to share, and above all, good for the web. Wired even did daily color changes in the beginning.

Dig through the code of these redesigns, and you'd be sure to find some hacks. The web still only lived on a few different monitor sizes, so you may notice that both sites used a combination of fixed width columns and relative and absolute positioning to slot a grid into place. Images were used in place of text. But these sites laid the groundwork for what would come next.

CSS Zen Garden and the Semantic Web

The following year, in 2003, Jeffrey Zeldman published his book Designing with Web Standards, which became a sort of handbook for web designers looking to switch to standards-based design. It kicked off a legacy of CSS techniques and tricks that helped web designers imagine what CSS could do. A year later, Dave Shea launched the CSS Zen Garden, which encouraged designers to take a basic HTML page and lay it out differently using just CSS. The site became a showcase of the latest tips and tricks, and went a long way towards convincing folks it was time for standards.

Slowly but surely, the momentum built. CSS advanced, and added new attributes. Browsers actually raced to implement the latest standards, and designers and developers added new tricks to their repertoire. And eventually, CSS became the norm. Like it had been there all along.

Enjoy learning about web history? Jay Hoffmann has a weekly newsletter called The History of the Web you can sign up for here.


On-Site Search

Wed, 10/18/2017 - 13:13

CSS-Tricks is a WordPress site. WordPress has a built-in search feature, but it isn't tremendously useful. I don't blame it, really. Search is a product unto itself and WordPress is a CMS company, not a search company.

You know how you can make a really powerful search engine for your site?

Here you go:

<form action="https://google.com/search" target="_blank" method="GET">
  <input type="search" name="q">
  <input type="submit" value="search">
</form>

Just a smidge of JavaScript trickery to enforce the site it searches:

var form = document.querySelector("form");
form.addEventListener("submit", function(e) {
  e.preventDefault();
  var search = form.querySelector("input[type=search]");
  search.value = "site:css-tricks.com " + search.value;
  form.submit();
});

I'm only 12% joking there. I think sending people over to Google search results for just your site for their search term is perfectly acceptable. Nobody will be confused by that. If anything, they'll be silently pleased.

Minor adjustments could send them to whatever search engine. Like DuckDuckGo:
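Something like this, for instance (a sketch that reuses the same site: prefix trick; check DuckDuckGo's own documentation for the parameters it officially supports):

```html
<form action="https://duckduckgo.com/" target="_blank" method="GET">
  <input type="search" name="q">
  <input type="submit" value="search">
</form>
```

The JavaScript from the Google version works unchanged, since all it does is prepend "site:css-tricks.com " to the query before submitting.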



The downsides of sending people offsite:

  1. They will leave your site
  2. They will see ads

To prevent #1, Google has long offered a site search product that lets you create and configure a custom search engine and embed it on your own site.

There has been lots of news about Google shutting down that service. For example, "Google site search is on the way out. Now what?" Eeek! This was quite confusing to me.

Turns out, what they're really shutting down is what's known as Google Site Search (GSS), an enterprise product. It shuts down entirely on April 1, 2018. Google has another product called Google Custom Search Engine (CSE) that doesn't seem to be going anywhere.

CSE is the thing I was using anyway. It has a free edition which has ads, and you can pay to remove them, although the pricing for that is also very confusing. I literally can't figure it out. For a site like CSS-Tricks, it will be hundreds or possibly thousands a year, best I can tell. Or you can hook up your own AdSense and at least attempt to make money off the ads that do show.

In the wake of all that, I thought I'd try something new with search. Algolia is a search product that I'd heard of quite a few people try, and it seems pretty beloved. With a little help from the wonderfully accommodating Algolia team, we've had that going for the last few months.

If we were to set up an implementation difficulty scale where my HTML/JavaScript form up there is a 1 and spinning up your own server and feeding Solr a custom data structure and coming up with your own rating algorithms is a 10, Algolia is like a 7. It's pretty heavy duty nerdy stuff.

With Algolia, you need to bring all your own data and structure and get it over to them, as all the search magic happens on their servers. Any new/changed/deleted data needs to be pushed there too. It's not your database, but generally any database CRUD you do will need to go to Algolia as well.

On that same difficulty scale, if you're adding Algolia to a WordPress site, that goes down to a 3 or 4. WordPress already has its own data structure, and Algolia has a WordPress plugin to push it all to them and keep it in sync. It's not zero work, but it's not too bad. The plugin also offers a UI/UX replacement over the default WordPress search form, which offers "instant results" as a dropdown. It really is amazingly fast. Submit the form anyway, and you're taken to a full-page search results screen that is also taken over by Algolia.

For disclosure, I'm a paying customer of Algolia and there is no sponsorship deal in place.

It's a pretty damn good product. As one point of comparison, I've gotten exactly zero feedback on the switch. Nobody has written in to tell me they noticed the change in search and now they can't find things as easily. And people write in to tell me stuff like that all the time, so not-a-peep feels like a win.

I'm paying $59 a month for superfast on-page search with no ads.

It's almost a no-brainer win, but there are a few downsides. One of them is the ranking of search results. It's pretty damn impressive out of the box, returning a far more relevant set of results than native WordPress search would. But, no surprise, it's no Google. Whatever internal magic is happening is trying its best, but it just doesn't have the data Google has. All it has is a bunch of text and maybe some internal linking data.

There are ways to make it better. For example, you can hook up your Google Analytics data to Algolia, essentially feeding it popularity data, so that Algolia results start to look more like Google results. It's not trivial to set up, but probably worth it!


What do y'all use for search on your sites?


I haven’t experienced imposter syndrome, and maybe you haven’t either

Tue, 10/17/2017 - 17:31

In recent years it’s become trendy to discuss how we all apparently suffer from imposter syndrome: an inability to internalize one’s accomplishments and a persistent fear of being exposed as a “fraud.”

I take two issues with this:

  • it minimizes the impact that this experience has on people who really do suffer from it.
  • we’re labelling what should be considered positive personality traits (humility, an acceptance that we can’t be right all the time, a desire to know more) as a “syndrome” that we need to “deal with,” “get over,” or “get past.”

It's not an officially recognized syndrome (yet?), but there are medical diagnoses that resemble imposter syndrome. A general feeling that you're faking it, or that you don't know as much as you should, isn't it.



Prettier + Stylelint: Writing Very Clean CSS (Or, Keeping Clean Code is a Two-Tool Game)

Tue, 10/17/2017 - 08:58

It sure is nice having a whole codebase that is perfectly compliant with a set of code style guidelines. All the files use the same indentation, the same quote style, the same spacing and line-break rules, heck, even tiny things like the way zeros in values are handled and how keyframes are named.

It seems like a tall order, but these days, it's easier than ever. It seems to me it's become a two-tool game:

  1. A tool to automatically fix easy-to-fix problems
  2. A tool to warn about harder-to-fix problems

Half the battle: Prettier

Otherwise known as "fix things for me, please".

Best I can tell, Prettier is a fairly new project, only busting onto the scene in January 2017. Now in the last quarter of 2017, it seems like everybody and their sister is using it. They call it an Opinionated Code Formatter.

The big idea: upon save of a document, all kinds of code formatting happens automatically. It's a glorious thing to behold. Indentation and spacing are corrected. Quotes are consistent-ified. Semicolons are added.

Run Prettier over your codebase once and gone are the muddy commits full of code formatting cruft. (You might consider making a temporary git user so one user doesn't look like they've committed a bazillion lines of code more than another, if you care about that.) That alone is a damn nice benefit. It makes looking through commits a heck of a lot easier and saves a bunch of grunt work.

As this post suggests, Prettier is only half the battle though. You'll notice that Prettier only supports a handful of options. In fact, I'm pretty sure when it launched it didn't have any configuration at all. Opinionated indeed.

What it does support are things that are easy to fix, requiring zero human brainpower. Use double quotes accidentally (uggkch muscle memory) when your style guide is single quotes? Boom - changed on save.

There are other potential problems that aren't as easy to fix. For example, you've used an invalid hex color code. You probably wouldn't want a computer guessing what you meant there. That's better left visually marked as an error for you to fix.

That's where this next part comes in.

The other half of the battle: Stylelint

Otherwise known as "let me know about problems, so I can fix them".

Stylelint is exactly that. In fact, in the GIF above showing Prettier do its thing, you saw some red dots and red outlines in my Sublime Text editor. That wasn't Prettier showing me what it was going to fix (Prettier displays no errors, it just fixes what it can). That was Stylelint running its linting and showing me those errors.

Whereas Prettier supports 10ish rules, Stylelint supports 150ish. There is a standard configuration, but you can also get as fine-grained as you want there and configure how you please. David Clark wrote about it here on CSS-Tricks last year.

With these warnings so clearly visible, you can fix them up by hand quickly. It becomes rather second nature.

Getting it all going

These tools work in a wide variety of code editors.

These are the Prettier editor integrations. Between all of these, that probably covers 96% of webdevnerds.

It's very easy to think "I'll just install this into my code editor, and it will work!" That gets me every time. Getting these tools to work is again a two-part game.

  1. Install code editor plugin.
  2. Do the npm / yarn installation stuff. These are node-based tools. That doesn't mean your project needs to have anything to do with node in production; these are local development dependencies.

These are intentionally separated things. The meat of these tools is the code that parses your code and figures out the problems it's going to fix. That happens through APIs that other tools can call. That means these tools don't have to be rewritten and ported to work in a new environment, instead, that new environment calls the same APIs everyone else does and does whatever it needs to do with the results.

Above is a barebones project in Sublime Text with both Prettier and Stylelint installed. Note the `package.json` shows we have our tools installed and I'm listing my "packages" so you can see I have the Sublime Text Plugin jsPrettier installed. You can also see the dotfiles there that configure the rules for both tools.

Don't let the "js" part mislead you. You could use this setup on the CSS of your WordPress site. It really doesn't matter what your project is.

Getting more exotic

There is certainly leveling up that can happen here. For example:

  • You might consider configuring Stylelint to ignore problems that Prettier fixes. They are going to be fixed anyway, so why bother looking at the errors?
  • You might consider updating your deployment process to stop if Stylelint problems are found. Sometimes Stylelint is showing you an error that will literally cause a problem, so it really shouldn't go to production.
  • We mostly talked about CSS here, but JavaScript is arguably even more important to lint (and Prettier supports it as well). ESLint is probably the way to go here. There are also tools like RuboCop for Ruby, and I'm sure linters for about every language imaginable.

Prettier + Stylelint: Writing Very Clean CSS (Or, Keeping Clean Code is a Two-Tool Game) is a post from CSS-Tricks

The Art of Comments

Mon, 10/16/2017 - 15:23

I believe commenting code is important. Most of all, I believe commenting is misunderstood. I tweeted out the other day that "I hear conflicting opinions on whether or not you should write comments. But I get thank you's from junior devs for writing them so I'll continue." The responses I received were varied, but what caught my eye was that for every person agreeing that commenting was necessary, they all had different reasons for believing this.

Commenting is a more nuanced thing than we give it credit for. There is no nomenclature for commenting (not that there should be) but lumping all comments together is an oversimplification. The example in this comic that was tweeted in response is true:

From Abstrusegoose

This is where I think a lot of the misconceptions of comments lie. The book Clean Code by Robert C. Martin talks about this: that comments shouldn't be necessary because code should be self-documenting. That if you feel a comment is necessary, you should rewrite the code to be more legible. I both agree and disagree with this. In the process of writing a comment, you can often find things that could be written better, but it's not an either/or. I might still be able to rewrite that code to be more self-documenting and also write a comment as well, for the following reason:

Code can describe how, but it cannot explain why.

This isn't a new concept, but it's a common theme I notice in helpful comments that I have come across. The ability to communicate something that the code cannot, or cannot concisely.

All of that said, there is just not one right way or one reason to write a comment. In order to better learn, let's dig into some of the many beneficial types of comments that might all serve a different purpose, followed by patterns we might want to avoid.

Good comments

What is the Why

Many examples of good comments can be housed under this category. Code explains what you'd like the computer to take action on. You'll hear people talk about declarative code because it describes the logic precisely but without describing all of the steps like a recipe. It lets the computer do the heavy lifting. We could also write our comments to be a bit more declarative:

/* We had to write this function because the browser interprets that everything is a box */

This doesn't describe what the code below it will do. It doesn't describe the actions it will take. But if you found a more elegant way of rewriting this function, you could feel confident in doing so because your code is likely the solution to the same problem in a different way.

Because of this, less maintenance is required (we'll dig more into this further on). If you found a better way to write this, you probably wouldn't need to rewrite the comment. You could also quickly understand whether you could rewrite another section of code to make this function unnecessary, without spending a long time parsing all the steps that make up the whole.

Clarifying something that is not legible by regular human beings

When you look at a long line of regex, can you immediately grok what's going on? If you can, you're in the minority, and even if you can at this moment, you might not be able to next year. What about a browser hack? Have you ever seen this in your code?

.selector { [;property: value;]; }

what about

var isFF = /a/[-1]=='a';

The first one targets Chrome ≤ 28, Safari ≤ 7, Opera ≥ 14, the second one is Firefox versions 2-3. I have written code that needs something like this. In order to avoid another maintainer or a future me assuming I took some Salvia before heading to work that day, it's great to tell people what the heck that's for. Especially in preparation for a time when we don't have to support that browser anymore, or the browser bug is fixed and we can remove it.
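To that end, the Firefox sniff above becomes much friendlier with the why riding along. The comment text here is my paraphrase of the explanation just given; the deletion note is the part that pays off later:

```javascript
// Browser sniff: Firefox 2-3 let you index a regex literal like an array,
// so this comparison is true there; modern engines return undefined for
// /a/[-1], so the flag stays false. Safe to delete once those versions
// no longer need support.
var isFF = /a/[-1] == 'a';
```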

Something that is clear and legible to you is not necessarily clear to others

Who's smart? We are! Who writes clean code? We do! We don't have to comment, look how clear it is. The problem with this way of thinking is that we all have deeper knowledge in different areas. On small teams where people's skillsets and expertise are more of a circle than a Venn diagram, this is less of an issue than on big groups that change teams or get junior devs or interns frequently. But I'd probably still make room for those newcomers or for future you. On bigger teams where there are junior engineers or even just engineers from all types of backgrounds, people might not outright tell you they need you to comment, but many of these people will also express gratitude when you do.

Comments like chapters of a book

If this very article was written as one big hunk rather than broken up into sections with whitespace and smaller headings, it would be harder to skim through. Maybe not all of what I'm saying applies to you. Commenting sections or pieces allows people to skip to a part most relevant to them. But alas! You say. We have functional programming, imports, and modules for this now.

It's true! We break things down into smaller bits so that they are more manageable, and thank goodness for that. But even in smaller sections of code, you'll necessarily come to a piece that has to be a bit longer. Being able to quickly grasp what is relevant, or having a label for an area that's a bit different, can speed up productivity.

A guide to keep the logic straight while writing the code

This one is an interesting one! These are not the kind of comments you keep, and thus could also be found in the "bad patterns" section. Many times when I'm working on a bigger project with a lot of moving parts, breaking things up into the actions I'm going to take is extremely helpful. This could look like

// get the request from the server and give an error if it failed
// do x thing with that request
// format the data like so

Then I can easily focus on one thing at a time. But when left in your code as is, these comments can be screwy to read later. They're so useful while you're writing them, but once you're finished they can merely be a duplication of what the code does, forcing the reader to read the same thing twice in two different ways. That doesn't make them any less valuable to write, though.

My perfect-world suggestion would be to use these comments at the time of writing and then revisit them after. As you delete them, you could ask: "Does this do what it does in the most elegant and legible way possible?" "Is there another comment I might replace this with that will explain why this is necessary?" "What would be the most useful thing to express to future me or to others?"

This is OK to refactor

Have you ever had a really aggressive product deadline? Perhaps you implemented a feature that you yourself disagreed with, or they told you it was "temporary" and "just an AB test so it doesn't matter". *Cue horror music* … and then it lived on… forever…

As embarrassing as it might be, writing comments like

// this isn't my best work, we had to get it in by the deadline

is rather helpful. As a maintainer, when I run across comments like this, I'll save buckets of time trying to figure out what the heck is wrong with this person and envisioning ways I could sabotage their morning commute. I'll immediately stop trying to figure out what parts of this code I should preserve and instead focus on what can be refactored. The only warning I'll give is to try not to make this type of coding your fallback (we'll discuss this in detail further on).

Commenting as a teaching tool

Are you a PHP shop that just was given a client that's all Ruby? Maybe it's totally standard Ruby but your team is in slightly over their heads. Are you writing a tutorial for someone? These are the limited examples for when writing out the how can be helpful. The person is literally learning on the spot and might not be able to just infer what it's doing because they've never seen it before in their lives. Comment that sh*t. Learning is humbling enough without them having to ask you aloud what they could more easily learn on their own.

I StackOverflow'd the bejeezus outta this

Did you just copy paste a whole block of code from Stack Overflow and modify it to fit your needs? This isn't a great practice but we've all been there. Something I've done that's saved me in the past is to put the link to the post where I found it. But! Then we won't get credit for that code! You might say. You're optimizing for the wrong thing would be my answer.

Inevitably people have different coding styles and the author of the solution solved a problem in a different way than you would if you knew the area deeper. Why does this matter? Because later, you might be smarter. You might level up in this area and then you'll spend less time scratching your head at why you wrote it that way, or learn from the other person's approach. Plus, you can always look back at the post, and see if any new replies came in that shed more light on the subject. There might even be another, better answer later.

Bad Comments

Writing comments gets a bad rap sometimes, and that's because bad comments do indeed exist. Let's talk about some things to avoid while writing them.

They just say what it's already doing

John Papa made the accurate joke that this:

// if foo equals bar
if (foo === bar) {

} // end if

is a big pain. Why? Because you're actually reading everything twice in two different ways. It gives no more information, in fact, it makes you have to process things in two different formats, which is mental overhead rather than helpful. We've all written comments like this. Perhaps because we didn't understand it well enough ourselves or we were overly worried about reading it later. For whatever the reason, it's always good to take a step back and try to look at the code and comment from the perspective of someone reading it rather than you as the author, if you can.
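For contrast, here is a sketch of the same shape of check with a comment that earns its keep. The variables and the stated reason are invented for illustration; the point is that the "why" is not recoverable from the code itself:

```javascript
var cachedTotal = 42; // value from the last render (hypothetical)
var freshTotal = 42;  // value just computed (hypothetical)
var skippedRender = false;

// Skip the re-render: recomputing the view is expensive and nothing
// changed. (The reason here is invented, but notice you couldn't have
// inferred it from the condition alone.)
if (cachedTotal === freshTotal) {
  skippedRender = true;
}
```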

It wasn't maintained

Bad documentation can be worse than no documentation. There's nothing more frustrating than coming across a block of code where the comment says something completely different than what's expressed below. Worse than time-wasting, it's misleading.

One solution to this is making sure that whatever code you are updating, you're maintaining the comments as well. And certainly having fewer and only more meaningful comments makes this upkeep less arduous. But commenting and maintaining comments are all part of an engineer's job. The comment is in your code; it is your job to work on it, even if that means deleting it.

If your comments are of good quality to begin with, and express why and not the how, you may find that this problem takes care of itself. For instance, if I write

// we need to FLIP this animation to be more performant in every browser

and refactor this code later to go from using getBoundingClientRect() to getBBox(), the comment still applies. The function exists for the same reason, but the details of the how are what changed.

You could have used a better name

I've definitely seen people write code (or done this myself) where the variable or function names are one letter, and then comment what the thing is. This is a waste. We all hate typing, but if you are using a variable or function name repeatedly, I don't want to scan the whole document for the place where you explained what the name itself could have told me. I get it, naming is hard. But some comments take the place of something that could easily be written more precisely.
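A tiny, invented illustration of the difference:

```javascript
// Bad: a cryptic name propped up by a comment you have to go find.
var d = 86400; // seconds in a day

// Better: the name carries the information everywhere it's used.
var secondsPerDay = 86400;
```

Same information, but only one version travels with every usage of the variable.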

The comments are an excuse for not writing the code better to begin with

This is the crux of the issue for a lot of people. If you are writing code that is haphazard, and leaning back on your comments to clarify, this means the comments are holding back your programming. This is a horse-behind-the-cart kind of scenario. Unfortunately, even as the author it's not so easy to determine which is which.

We lie to ourselves in myriad ways. We might spend the time writing a comment that could be better spent making the code cleaner to begin with. We might also tell ourselves we don't need to comment our code because our code is well-written, even if other people might not agree.

There are lazy crutches in both directions. Just do your best. Try not to rely on just one correct way and instead write your code, and then read it. Try to envision you are both the author and maintainer, or how that code might look to a younger you. What information would you need to be as productive as possible?

Lately, people tend to get on one side or the other of "whether you should write comments", but I would argue that conversation is not nuanced enough. Hopefully opening the floor to a deeper conversation about how to write meaningful comments bridges the gap.

Even so, it can be a lot to parse. Haha get it? Anyways, I'll leave you with some (better) humor. A while back there was a Stack Overflow post about the best comments people have written or seen. You can definitely waste some time in here. Pretty funny stuff.

The Art of Comments is a post from CSS-Tricks

Getting Nowhere on Job Titles

Mon, 10/16/2017 - 07:37

Last week on ShopTalk, Dave and I spoke with Mandy Michael and Lara Schenck. Mandy had just written the intentionally provocative "Is there any value in people who cannot write JavaScript?" which guided our conversation. Lara is deeply interested in this subject as well, as someone who is a job seeking web worker, but places herself on the spectrum as a non-unicorn.

Part of that discussion was about job titles. If there was a ubiquitously accepted and used job title that meant you were specifically skilled at HTML and CSS, and there was a market for that job title, there probably wouldn't be any problem at all. There isn't though. "Web developer" is too vague. "Front-end developer" maybe used to mean that, but has been largely co-opted by JavaScript.

In fact, you might say that none of us has an exactly perfect job title and the industry at large has trouble agreeing on a set of job titles.

Lara created a repo with the intent to think all this out and discuss it.

If there is already a spectrum between design and backend development, and front-end development is that place in between, perhaps front-end development, if we zoom in, is a spectrum as well:

I like the idea of spectrums, but I also agree with a comment by Sarah Drasner where she mentioned that this makes it seem like you can't be good at both. If you're a dot right in the middle of this spectrum, you are, for example, not as good at JavaScript as someone on the right.

This could probably be fixed with some different dataviz (perhaps the size of the dot), or, heaven forbid, skill-level bars.

More importantly, if you're really interested in the discussion around all this, Lara has used the issues area to open that up.

Last year, Geoff also started thinking about all our web jobs as a spectrum. We can break our jobs up into parts and map ourselves onto those parts in different ways:

See the Pen Web Terminology Matrix by Geoff Graham (@geoffgraham) on CodePen.

See the Pen Web Terminology Venn Diagram by Geoff Graham (@geoffgraham) on CodePen.

That can certainly help us understand our world a little bit, but doesn't quite help with the job titles thing. It's unlikely we'll get people to write job descriptions that include a data visualization of what they are looking for.

Jeff Pelletier took a crack at job titles and narrowed it down to three:

Front-end Implementation (responsive web design, modular/scalable CSS, UI frameworks, living style guides, progressive enhancement & accessibility, animation and front-end performance).

Application Development (JavaScript frameworks, JavaScript preprocessors, code quality, process automation, testing).

Front-end Operations (build tools, deployment, speed: (app, tests, builds, deploys), monitoring errors/logs, and stability).

Although those don't quite feel like titles to me and converting them into something like "Front-end implementation developer" doesn't seem like something that will catch on.

Cody Lindley's Front-End Developer Handbook has a section on job titles. I won't quote it in full, but they are:

  • Front-End Developer
  • Front-End Engineer (aka JavaScript Developer or Full-stack JavaScript Developer)
  • CSS/HTML Developer
  • Front-End Web Designer
  • Web/Front-End User Interface (aka UI) Developer/Engineer
  • Mobile/Tablet Front-End Developer
  • Front-End SEO Expert
  • Front-End Accessibility Expert
  • Front-End Dev. Ops
  • Front-End Testing/QA

Note the contentious "full stack" title, about which Brad Frost says:

In my experience, “full-stack developers” always translates to “programmers who can do frontend code because they have to and it’s ‘easy’.” It’s never the other way around.

Still, these largely feel pretty good to me. And yet, weirdly, it's almost like there are both too many and too few. As in, while there is good coverage here, if you are going to cover specialties, you might as well add in performance, copywriting, analytics, and more as well. The more you add, the further away we are from locking things down. Not to mention it becomes harder when people cross over these disciplines, like they almost always do.

Oh well.

Getting Nowhere on Job Titles is a post from CSS-Tricks

A Bit on Buttons

Sat, 10/14/2017 - 14:46

The other day we published an article with a bonafide CSS trick where an element with a double border could look like a pause icon, and morph nicely into a CSS triangle looking like a play icon. It was originally published with a <div> being the demo element, which was a total accessibility flub on our part, as something intended to be interacted with like this is really a <button>.

It also included a demo using the checkbox hack to toggle the state of the button. That changes the keyboard interaction from a "return" click to a "space bar" toggle, but more importantly should have had a :focus state to indicate the button (actually a label) was interactive at all.

Both have been fixed.


Adam Silver has an interesting post where the title does a good job of setting up the issue:

But sometimes links look like buttons (and buttons look like links)

Buttons that are buttons aren't contentious (e.g. a form submit button). Links that are links aren't contentious. The trouble comes in when we cross the streams.

Buttons (that have type="button") are not submit buttons. Buttons are used to create features that rely on Javascript. Behaviours such as revealing a menu or showing a date picker.

A call-to-action "button" is his good example on the other side. They are often just links that are styled like a button for prominence. This whole passage is important:

In Resilient Web Design Jeremy Keith discusses the idea of material honesty. He says that “one material should not be used as a substitute for another, otherwise the end result is deceptive”.

Making a link look like a button is materially dishonest. It tells users that links and buttons are the same when they’re not.

In Buttons In Design Systems Nathan Curtis says that we should distinguish links from buttons because “button behaviours bring a whole host of distinct considerations from your simple anchor tag”.

For example, we can open a link in a new tab, copy the address or bookmark it for later. All of which we can’t do with buttons.

Call to action buttons— which again, are just links — are deceptive. Users are blissfully unaware because this styling removes their natural affordance, obscuring their behaviour.

We could make call to action buttons look like regular links. But this makes them visually weak which negates their prominence. Hence the problem.

I find even amongst <button>s you can have issues, since what those buttons do is often quite different. For example, the Fork button on CodePen takes you to a brand new page with a new copy of a Pen, which feels a bit like clicking a link. But it's not a link, which means it behaves differently and requires explanation.


I'll repeat Adam again here:

Buttons are used to create features that rely on Javascript.

Buttons within a <form> have functionality without JavaScript, but that is the only place.

Meaning, a <button> is entirely useless in HTML unless JavaScript is successfully downloaded and executed.

Taken to an extreme logical conclusion, you should never use a <button> (or type="button") in HTML outside of a form. Since JavaScript is required for the button to do anything, you should inject the button into place with JavaScript once its functionality is ready to go.

Or if that's not possible...

<button disabled title="This button will become functional once JavaScript is downloaded and executed"> Do Thing </button>

Then change those attributes once ready.

A Bit on Buttons is a post from CSS-Tricks

Writing Smarter Animation Code

Fri, 10/13/2017 - 15:02

If you've ever coded an animation that's longer than 10 seconds with dozens or even hundreds of choreographed elements, you know how challenging it can be to avoid the dreaded "wall of code". Worse yet, editing an animation that was built by someone else (or even yourself 2 months ago) can be nightmarish.

In these videos, I'll show you the techniques that the pros use to keep their code clean, manageable, and easy to revise. Scripted animation gives you the opportunity to create animations that are incredibly dynamic and flexible. My goal is for you to have fun without getting bogged down by the process.

We'll be using GSAP for all the animation. If you haven't used it yet, you'll quickly see why it's so popular - the workflow benefits are substantial.

See the Pen SVG Wars: May the morph be with you. (Craig Roblewsky) on CodePen.

The demo above from Craig Roblewsky is a great example of the types of complex animations I want to help you build.

This article is intended for those who have a basic understanding of GSAP and want to approach their code in a smarter, more efficient way. However, even if you haven't used GSAP, or prefer another animation tool, I think you'll be intrigued by these solutions to some of the common problems that all animators face. Sit back, watch and enjoy!

Video 1: Overview of the techniques

The video below will give you a quick behind-the-scenes look at how Craig structured his code in the SVG Wars animation and the many benefits of these workflow strategies.

Although this is a detailed and complex animation, the code is surprisingly easy to work with. It's written using the same approach that we at GreenSock use for any animation longer than a few seconds. The secret to this technique is two-fold:

  1. Break your animation into smaller timelines that get glued together in a master (parent) timeline.
  2. Use functions to create and return those smaller timelines.

This makes your code modular and easy to edit.

Video 2: Detailed Example

I'll show you exactly how to build a sequence using functions that create and return timelines. You'll see how packing everything into one big timeline (no modular nesting) results in the intimidating "Wall of Code". I'll then break the animation down into separate timelines and use a parameterized function that does all the heavy lifting with 60% less code!

Let's review the key points...

Avoid the dreaded wall of code

A common strategy (especially for beginners) is to create one big timeline containing all of the animation code. Although a timeline offers tons of features that accommodate this style of coding, it's just a basic reality of any programming endeavor that too much code in one place will become unwieldy.

Let's upgrade the code so that we can apply the same techniques Craig used in the SVG wars animation...

See the Pen Wall of Code on CodePen.

Be sure to investigate the code in the "JS" tab. Even for something this simple, the code can be hard to scan and edit, especially for someone new to the project. Imagine if that timeline had 100 lines. Mentally parsing it all can be a chore.

Create a separate timeline for each panel

By separating the animation for each panel into its own timeline, the code becomes easier to read and edit.

var panel1 = new TimelineLite();
panel1.from(...);
...

var panel2 = new TimelineLite();
panel2.from(...);
...

var panel3 = new TimelineLite();
panel3.from(...);
...

Now it's much easier to do a quick scan and find the code for panel2. However, when these timelines are created they will all play instantly, but we want them sequenced.

See the Pen

No problem - just nest them in a parent timeline in whatever order we want.

Nest each timeline using add()

One of the greatest features of GSAP's timeline tools (TimelineLite / TimelineMax) is the ability to nest animations as deeply as you want (place timelines inside of other timelines).

The add() method allows you add any tween, timeline, label or callback anywhere in a timeline. By default, things are placed at the end of the timeline which is perfect for sequencing. In order to schedule these 3 timelines to run in succession we will add each of them to a master timeline like so:

//create a new parent timeline
var master = new TimelineMax();

//add child timelines
master.add(panel1)
      .add(panel2)
      .add(panel3);

Demo with all code for this stage:

See the Pen

The animation looks the same, but the code is much more refined and easy to parse mentally.
Some key benefits of nesting timelines are that you can:

  • Scan the code more easily.
  • Change the order of sections by just moving the add() code.
  • Change the speed of an individual timeline.
  • Make one section repeat multiple times.
  • Have precise control over the placement of each timeline using the position parameter (beyond the scope of this article).
Use functions to create and return timelines

The last step in optimizing this code is to create a function that generates the animations for each panel. Functions are inherently powerful in that they:

  • Can be called many times.
  • Can be parameterized in order to vary the animations they build.
  • Allow you to define local variables that won't conflict with other code.

Since each panel is built using the same HTML structure and the same animation style, there is a lot of repetitive code that we can eliminate by using a function to create the timelines. Simply tell that function which panel to operate on and it will do the rest.

Our function takes in a single panel parameter that is used in the selector string for all the tweens in the timeline:

function createPanel(panel) {
  var tl = new TimelineLite();
  tl.from(panel + " .bg", 0.4, {scale:0, ease:Power1.easeInOut})
    .from(panel + " .bg", 0.3, {rotation:90, ease:Power1.easeInOut}, 0)
    .staggerFrom(panel + " .text span", 1.1, {y:-50, opacity:0, ease:Elastic.easeOut}, 0.06)
    .addLabel("out", "+=1")
    .staggerTo(panel + " .text span", 0.3, {opacity:0, y:50, ease:Power1.easeIn}, -0.06, "out")
    .to(panel + " .bg", 0.4, {scale:0, rotation:-90, ease:Power1.easeInOut});
  return tl; //very important that the timeline gets returned
}

We can then build a sequence out of all the timelines by placing each one in a parent timeline using add().

var master = new TimelineMax();
master.add(createPanel(".panel1"))
      .add(createPanel(".panel2"))
      .add(createPanel(".panel3"));

Completed demo with full code:

See the Pen

This animation was purposefully designed to be relatively simple and use one function that could do all the heavy lifting. Your real-world projects may have more variance but even if each child animation is unique, I still recommend using functions to create each section of your complex animations.

Check out this wonderful pen from Sarah Drasner, built using functions that return timelines; it illustrates how to do exactly that!

See the Pen

And of course the same technique is used on the main GSAP page animation:

See the Pen


You may have noticed that fancy timeline controller used in some of the demos and the videos. GSDevTools was designed to super-charge your workflow by allowing you to quickly navigate and control any GSAP tween or timeline. To find out more about GSDevTools visit greensock.com/GSDevTools.


Next time you've got a moderately complex animation project, try these techniques and see how much more fun it is and how quickly you can experiment. Your coworkers will sing your praises when they need to edit one of your animations. Once you get the hang of modularizing your code and tapping into GSAP's advanced capabilities, it'll probably open up a whole new world of possibilities. Don't forget to use functions to handle repetitive tasks.

As with all projects, you'll probably have a client or art director ask:

  • "Can you slow the whole thing down a bit?"
  • "Can you take that 10-second part in the middle and move it to the end?"
  • "Can you speed up the end and make it loop a few times?"
  • "Can you jump to that part at the end so I can check the copy?"
  • "Can we add this new, stupid idea I just thought of in the middle?"

Previously, these requests would trigger a panic attack and put the entire project at risk, but now you can simply say "gimme 2 seconds..."

Additional Resources

To find out more about GSAP and what it can do, check out the following links:

CSS-Tricks readers can use the coupon code CSS-Tricks for 25% off a Club GreenSock membership which gets you a bunch of extras like MorphSVG and GSDevTools (referenced in this article). Valid through 11/14/2017.

Writing Smarter Animation Code is a post from CSS-Tricks

CSS-Tricks Chronicle XXXII

Fri, 10/13/2017 - 14:28

Hey y'all! Time for a quick Chronicle post where I get to touch on and link up some of the happenings around the site that I haven't gotten to elsewhere.

Technologically around here, there have been a few small-but-interesting changes.

Site search is and has been powered by Algolia the last few months. I started up writing some thoughts about that here, and it got long enough I figured I'd crack it off into its own blog post, so look forward to that soon.

Another service I've started making use of is Cloudinary. Cloudinary is an image CDN, so it's serving most of the image assets here now, and we're squeezing as much performance out of that as we possibly can. Similar to Algolia, it has a WordPress plugin that does a lot of the heavy lifting. We're still working out some kinks as well. If you're interested in how that all goes down, Eric Portis and I did a screencast about it not too long ago.

We hit that big 10-year milestone not too long ago. It feels both like heck yes and like just another year, in the sense that trucking right along is what we do best.

We still have plenty of nerdy shirts (free shipping) I printed up to sorta celebrate that anniversary, but still be generic and fun.

As I type, I'm sitting in New Orleans after CSS Dev Conf just wrapped up. Well, a day after that, because after such an amazing and immersive event, and a full day workshop where I talk all day long, I needed to fall into what my wife calls "an introvert hole" for an entire day of recovery.

From here, I fly to Barcelona for Smashing Conf which is October 17-18.

The last two conferences for me this year will be An Event Apart San Francisco in late October and Denver in mid-December.

Next year will be much lighter on conference travel. Between having a daughter on the way, wanting more time at home, and desiring a break, I won't be on the circuit too much next year. Definitely a few though, and I do have at least one big fun surprise to talk about soon.

CodePen has been hard at work, as ever. Sometimes our releases are new public features, like the new Dashboard. Sometimes the work is mostly internal. For example, we undertook a major rewriting of our payment system so that we could be much more flexible in how we structure plans and what payment providers we could use. For example, we now use Braintree in addition to Stripe, so that we could make PayPal a first-class checkout citizen like many users expect.

It's the same story as I write. We're working on big projects, some of which users will see and directly be able to use, and some of which are infrastructural, making CodePen better from the other side.

Did you know the CSS-Tricks Job Board is powered by the CodePen Job Board? Post in one place, it goes to both. Plus, if you just wanna try it out and see if it's effective for your company, it's free.

We don't really have official "seasons" on ShopTalk, but sometimes we think of it that way. As this year approaches a close, we know we'll be taking at least a few weeks off, making somewhat of a seasonal break.

Our format somewhat slowly morphs over time, but we still often have guests and still answer questions, the heart of ShopTalk Show. Our loose plan moving forward is to be even more flexible with the format, with more experimental shows and unusual guests. After all, the show is on such a niche topic anyway (in the grand scheme of things) that we don't plan to change; we might as well have the flexibility to do interesting things that still circle around, educate, and entertain around web design and development.

I've gotten to be a guest on some podcasts recently!

I also got to do a written interview with Steve Domino for Nanobox, The Art of Development. Plus, Sparkbox wrote up a recap of my recent workshop there, Maker Series Recap: Chris Coyier.

Personally, I've completed my move out to Bend, Oregon! I'm loving Bend so far and look forward to calling it home for many years to come. For the first time ever, I have my own office. Well, it's a shared room in a shared office, but we all went in on it together and it's ours. We're moved in and decking it out over the coming months and it's been fun and feels good.

CSS-Tricks Chronicle XXXII is a post from CSS-Tricks

Let There Be Peace on CSS

Fri, 10/13/2017 - 14:16

Cristiano Rastelli:

In the last few months there’s been a growing friction between those who see CSS as an untouchable layer in the “separation of concerns” paradigm, and those who have simply ignored this golden rule and have found different ways to style the UI, typically applying CSS styles via JavaScript.

He does a great job of framing the "problem", exploring the history, and pointing to things that make this seem rather war-like, including one of my own!

As Cristiano also makes clear, it's not so much a war as a young community still figuring things out, solving problems for ourselves, and zigzagging through time waiting for this to shake out.

So, here are my suggestions:

  1. Embrace the ever-changing nature of the web.
  2. Be careful with your words: they can hurt.
  3. Be pragmatic, not dogmatic. But most of all, be curious.

Direct Link to ArticlePermalink

Let There Be Peace on CSS is a post from CSS-Tricks

You can get pretty far in making a slider with just HTML and CSS

Thu, 10/12/2017 - 13:54

A "slider", as in, a bunch of boxes set in a row that you can navigate between. You know what a slider is. There are loads of features you may want in a slider. Just as one example, you might want the slider to be swiped or scrolled. Or, you might not want that, and to have the slider only respond to click or tappable buttons that navigate to slides. Or you might want both. Or you might want to combine all that with autoplay.

I'm gonna go ahead and say that sliders are complicated enough of a UI component that they're use-JavaScript territory, Flickity being a fine example. I'd also say that you can get pretty far building a nice-looking, functional slider with HTML and CSS alone. Starting that way makes the JavaScript easier and, dare I say, is a decent example of progressive enhancement.

Let's consider the semantic markup first.

A bunch of boxes is probably as simple as:

<div class="slider">
  <div class="slide" id="slide-1"></div>
  <div class="slide" id="slide-2"></div>
  <div class="slide" id="slide-3"></div>
  <div class="slide" id="slide-4"></div>
  <div class="slide" id="slide-5"></div>
</div>

With a handful of lines of CSS, we can set them next to each other and let them scroll.

.slider {
  width: 300px;
  height: 300px;
  display: flex;
  overflow-x: auto;
}
.slide {
  width: 300px;
  flex-shrink: 0;
  height: 100%;
}

Might as well make it swipe smoothly on WebKit based mobile browsers.

.slider {
  ...
  -webkit-overflow-scrolling: touch;
}

We can do even better!

Let's have each slide snap into place with snap points.

.slider {
  ...
  -webkit-scroll-snap-points-x: repeat(300px);
  -ms-scroll-snap-points-x: repeat(300px);
  scroll-snap-points-x: repeat(300px);
  -webkit-scroll-snap-type: mandatory;
  -ms-scroll-snap-type: mandatory;
  scroll-snap-type: mandatory;
}

Look how much nicer it is now:

Jump links

A slider probably has a little UI to jump to a specific slide, so let's do that semantically as well, with anchor links that jump to the correct slide:

<div class="slide-wrap">
  <a href="#slide-1">1</a>
  <a href="#slide-2">2</a>
  <a href="#slide-3">3</a>
  <a href="#slide-4">4</a>
  <a href="#slide-5">5</a>
  <div class="slider">
    <div class="slide" id="slide-1">1</div>
    <div class="slide" id="slide-2">2</div>
    <div class="slide" id="slide-3">3</div>
    <div class="slide" id="slide-4">4</div>
    <div class="slide" id="slide-5">5</div>
  </div>
</div>

Anchor links that actually behave as links to related content are semantic and accessible, so no problems there (feel free to correct me if I'm wrong).

Let's style things up a little bit... and we've got some buttons that do their job:

On both desktop and mobile, we can still make sure we get smooth sliding action, too!

.slides {
  ...
  scroll-behavior: smooth;
}

Maybe we'd only display the buttons in situations without nice snappy swiping?

If the browser supports scroll-snap-type, it's got nice snappy swiping. We could just hide the buttons if we wanted to:

@supports (scroll-snap-type: mandatory) {
  .slider > a {
    display: none;
  }
}

Need to do something special to the "active" slide?

We could use :target for that. When one of the buttons to navigate slides is clicked, the URL changes to that #hash, and that's when :target takes effect. So:

.slides > div:target { transform: scale(0.8); }

There is a way to build this slider with the checkbox hack as well, and still do "active slide" stuff with :checked, but you might argue that's a bit less semantic and accessible.

Here's where we are so far.

See the Pen Real Simple Slider by Chris Coyier (@chriscoyier) on CodePen.

This is where things break down a little bit.

Using :target is a neat trick, but it doesn't work, for example, when the page loads without a hash, or if the user scrolls or flicks on their own without using the buttons. I don't think there is any way around this with just HTML and CSS, nor do I think that's entirely a failure of HTML and CSS. It's just the kind of thing JavaScript is for.

JavaScript can figure out what the active slide is. JavaScript can set the active slide. Probably worth looking into the Intersection Observer API.
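A minimal sketch of that idea: keep the selection logic in a pure function, and let an Intersection Observer feed it visibility ratios. The wiring is shown in comments (the .slider/.slide selectors come from this article's markup; the "active" class name is made up):

```javascript
// Sketch: the "active" slide is the one most visible in the scroller.
// Keeping this as a pure function makes it easy to reason about.
function mostVisibleIndex(ratios) {
  // ratios: one intersectionRatio (0..1) per slide
  let best = 0;
  ratios.forEach((r, i) => { if (r > ratios[best]) best = i; });
  return best;
}

// In the browser, something like this would feed it:
//
// const slides = [...document.querySelectorAll('.slide')];
// const ratios = slides.map(() => 0);
// const io = new IntersectionObserver(entries => {
//   entries.forEach(e => { ratios[slides.indexOf(e.target)] = e.intersectionRatio; });
//   slides.forEach(s => s.classList.remove('active'));
//   slides[mostVisibleIndex(ratios)].classList.add('active');
// }, { root: document.querySelector('.slider'), threshold: [0, 0.5, 1] });
// slides.forEach(s => io.observe(s));

console.log(mostVisibleIndex([0, 0.2, 1, 0.3])); // 2
```

The observer only fires when visibility crosses a threshold, so this stays cheap compared to listening to every scroll event.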

What are more limitations?

We've about tapped out what HTML and CSS alone can do here.

  • Want to be able to flick with a mouse? That's not native mouse behavior, so you'll need to do all that with DOM events. Any kind of exotic interactive behavior (e.g. physics) will require JavaScript. Although there is a weird trick for flipping vertical scrolling for horizontal.
  • Want to know when a slide is changed? Like a callback? That's JavaScript territory.
  • Need autoplay? You might be able to do something rudimentary with a checkbox, :checked, and controlling the animation-play-state of a @keyframes animation, but it will feel limited and janky.
  • Want to have it infinitely scroll in one direction, repeating as needed? That's going to require cloning and moving stuff around in the DOM. Or perhaps some gross misuse of <marquee>.

I'll leave you with those. My point is only that there is a lot you can do before you need JavaScript. Starting with that strong of a base might be a way to go that provides a happy fallback, regardless of what you do on top of it.

You can get pretty far in making a slider with just HTML and CSS is a post from CSS-Tricks

Wufoo
Thu, 10/12/2017 - 13:43

(This is a sponsored post.)

When asked "Why Wufoo?" they say:

Because you’re busy and want your form up and running yesterday.

Wufoo is a form builder that not only makes it fast and easy to build a form so you really can get it up and running in just minutes, but also has all the power you need. What makes forms hard are things like preventing spam, adding logic, making them mobile friendly, and integrating what you collect with other services. Wufoo also makes that stuff easy. If you're at least curious, head over there and browse the templates or play with the demo form builder.

Direct Link to ArticlePermalink

Wufoo is a post from CSS-Tricks

Exploring Data with Serverless and Vue: Filtering and Using the Data

Wed, 10/11/2017 - 13:42

In this second article of the tutorial, we'll take the data we got from our serverless function and use Vue and Vuex to disseminate the data, update our table, and modify the data to use in our WebGL globe. This article assumes some base knowledge of Vue. By far the coolest/most useful thing we'll address in this article is the use of computed properties in Vue.js to create performant filtering of the table. Read on!

Article Series:
  1. Automatically Update GitHub Files With Serverless Functions
  2. Filtering and Using the Data (you are here!)

You can check out the live demo here, or explore the code on GitHub.

First, we'll spin up an entire Vue app with server-side rendering, routing, and code-splitting with a tool called Nuxt. (This is similar to Zeit's Next.js for React). If you don't already have the Vue CLI tool installed, run

npm install -g vue-cli
# or
yarn global add vue-cli

This installs the Vue CLI globally so that we can use it whenever we wish. Then we'll run:

vue init nuxt/starter my-project
cd my-project
yarn

That creates this application in particular. Now we can kick off our local dev server with:

npm run dev

If you're not already familiar with Vuex, it's similar to React's Redux. There's more in depth information on what it is and does in this article here.

import Vuex from 'vuex';
import speakerData from './../assets/cda-data.json';

const createStore = () => {
  return new Vuex.Store({
    state: {
      speakingColumns: ['Name', 'Conference', 'From', 'To', 'Location'],
      speakerData
    }
  });
};

export default createStore;

Here, we're pulling the speaker data from our `cda.json` file that has now been updated with latitude and longitude from our Serverless function. As we import it, we're going to store it in our state so that we have application-wide access to it. You may also notice that now that we've updated the JSON with our Serverless function, the columns no longer correspond to what we want to use in our table. That's fine! We'll also store only the columns we need to create the table.

Now in the pages directory of our app, we'll have an `Index.vue` file. If we wanted more pages, we would merely need to add them to this directory. We're going to use this index page for now and use a couple of components in our template.

<template>
  <section>
    <h1>Cloud Developer Advocate Speaking</h1>
    <h3>Microsoft Azure</h3>
    <div class="tablecontain">
      ...
      <speaking-table></speaking-table>
    </div>
    <more-info></more-info>
    <speaking-globe></speaking-globe>
  </section>
</template>

We're going to bring all of our data in from the Vuex store, and we'll use a computed property for this. We'll also create a way to filter that data in a computed property here as well. We'll end up passing that filtered property to both the speaking table and the speaking globe.

computed: {
  speakerData() {
    return this.$store.state.speakerData;
  },
  columns() {
    return this.$store.state.speakingColumns;
  },
  filteredData() {
    const x = this.selectedFilter,
          filter = new RegExp(this.filteredText, 'i')
    return this.speakerData.filter(el => {
      if (el[x] !== undefined) {
        return el[x].match(filter)
      } else return true;
    })
  }
}

You'll note that we're using the names of the computed properties, even in other computed properties, the same way that we use data, i.e. speakerData() becomes this.speakerData in the filter. It would also be available to us as {{ speakerData }} in our template, and so forth. This is how they are used. Quickly sorting and filtering a lot of data in a table based on user input is definitely a job for computed properties. In this filter, we'll also check and make sure we're not throwing things out due to case sensitivity, or trying to match a row that's undefined, as our data sometimes has holes in it.

Here's an important part to understand, because computed properties in Vue are incredibly useful: they are calculations that are cached based on their dependencies and only update when needed. This means they're extremely performant when used well. Computed properties aren't used like methods, though at first they might look similar; while we register them in the same way, typically with some accompanying logic, they're actually used more like data. You can consider them another view into your data.

Computed values are very valuable for manipulating data that already exists. Anytime you're building something where you need to sort through a large group of data, and you don't want to rerun those calculations on every keystroke, think about using a computed value. Another good candidate would be when you're getting information from your Vuex store. You'd be able to gather that data and cache it.

Creating the inputs

Now, we want to allow the user to pick which type of data they are going to filter. In order to use that computed property to filter based on user input, we can create a value as an empty string in our data, and use v-model to establish a relationship between what is typed in this search box and the data we want filtered in that filteredData function from earlier. We'd also like them to be able to pick a category to narrow down their search. In our case, we already have access to these categories; they are the same as the columns we used for the table. So we can create a select with a corresponding label:

<label for="filterLabel">Filter By</label>
<select id="filterLabel" name="select" v-model="selectedFilter">
  <option v-for="column in columns" key="column" :value="column">
    {{ column }}
  </option>
</select>

We'll also wrap that extra filter input in a v-if directive, because it should only be available to the user if they have already selected a column:

<span v-if="selectedFilter">
  <label for="filteredText" class="hidden">{{ selectedFilter }}</label>
  <input id="filteredText" type="text" name="textfield" v-model="filteredText">
</span>

Creating the table

Now, we'll pass the filtered data down to the speaking table and speaking globe:

<speaking-globe :filteredData="filteredData"></speaking-globe>

Which makes it available for us to update our table very quickly. We can also make good use of directives to keep our table small, declarative, and legible.

<table class="scroll">
  <thead>
    <tr>
      <th v-for="key in columns">
        {{ key }}
      </th>
    </tr>
  </thead>
  <tbody>
    <tr v-for="(post, i) in filteredData">
      <td v-for="entry in columns">
        <a :href="post.Link" target="_blank">{{ post[entry] }}</a>
      </td>
    </tr>
  </tbody>
</table>

Since we're using that computed property we passed down that's being updated from the input, it will take this other view of the data and use that instead, and will only update if the data is somehow changed, which will be pretty rare.

And now we have a performant way to scan through a lot of data on a table with Vue. The directives and computed properties are the heroes here, making it very easy to write this declaratively.

I love how fast it filters the information with very little effort on our part. Computed properties leverage Vue's ability to cache wonderfully.

Creating the Globe Visualization

As mentioned previously, I'm using a library from Google dataarts for the globe, found in this repo.

The globe is beautiful out of the box but we need two things in order to work with it: we need to modify our data to create the JSON that the globe expects, and we need to know enough about three.js to update its appearance and make it work in Vue.

It's an older repo, so it's not available to install as an npm module, which is actually just fine in our case, because we're going to manipulate the way it looks a bit because I'm a control freak ahem I mean, we'd like to play with it to make it our own.

Dumping all of this repo's contents into a method isn't that clean though, so I'm going to make use of a mixin. The mixin allows us to do two things: it keeps our code modular so that we're not scanning through a giant file, and it allows us to reuse this globe if we ever wanted to put it on another page in our app.

I register the globe like this:

import * as THREE from 'three';
import { createGlobe } from './../mixins/createGlobe';

export default {
  mixins: [createGlobe],
  …
}

and create a separate file in a directory called mixins (in case I'd like to make more mixins) named `createGlobe.js`. For more information on mixins and how they work and what they do, check out this other article I wrote on how to work with them.

Modifying the data

If you recall from the first article, in order to create the globe, we need to feed it values that look like this:

var data = [
  [
    'seriesA',
    [ latitude, longitude, magnitude, latitude, longitude, magnitude, ... ]
  ],
  [
    'seriesB',
    [ latitude, longitude, magnitude, latitude, longitude, magnitude, ... ]
  ]
];

So far, the filteredData computed value we're returning from our store will give us our latitude and longitude for each entry, because we got that information from our computed property. For now we just want one view of that dataset, just my team's data, but in the future we might want to collect information from other teams as well so we should build it out to add new values fairly easily.

Let's make another computed value that returns the data the way that we need it. We're going to make it as an object first because that will be more efficient while we're building it, and then we'll create an array.

teamArr() {
  //create it as an object first because that's more efficient than an array
  var endUnit = {};

  //our logic to build the data will go here

  //we'll turn it into an array here
  let x = Object.entries(endUnit);
  let area = [],
      places,
      all;
  for (let i = 0; i < x.length; i++) {
    [all, places] = x[i];
    area.push([all, [].concat(...Object.values(places))]);
  }
  return area;
}

In the object we just created, we'll see if our values exist already, and if not, we'll create new ones. We'll also have to create a key from the latitude and longitude put together so that we can check for repeat instances. This is particularly helpful because I don't know if my teammates will enter the location as just the city or as the city and the state. The Google Maps API is pretty forgiving in this way: it'll find one consistent location for either string.

We'll also decide the smallest value of the magnification and how much to increment it by. Our decision for the magnification will come mainly from trial and error, adjusting the value and seeing what fits in a way that makes sense for the viewer. My first try here produced long, stringy, wobbly poles that looked like a balding broken porcupine; it took a minute or so to find a value that worked.

this.speakerData.forEach(function(index) {
  let lat = index.Latitude,
      long = index.Longitude,
      key = lat + ", " + long,
      magBase = 0.1,
      val = 'Microsoft CDAs';

  //if either the latitude or longitude is missing, skip it
  if (lat === undefined || long === undefined) return;

  //because the pins are grouped together by magnitude, as we build out the data, we need to check if one exists or increment the value
  if (val in endUnit) {
    //if we already have this location (stored together as key) let's increment it
    if (key in endUnit[val]) {
      //we'll increase the magnification here
    }
  } else {
    //we'll create the new values here
  }
})

Now, we'll check if the location already exists, and if it does, we'll increment it. If not, we'll create new values for them.

this.speakerData.forEach(function(index) {
  ...
  if (val in endUnit) {
    //if we already have this location (stored together as key) let's increment it
    if (key in endUnit[val]) {
      endUnit[val][key][2] += magBase;
    } else {
      endUnit[val][key] = [lat, long, magBase];
    }
  } else {
    let y = {};
    y[key] = [lat, long, magBase];
    endUnit[val] = y;
  }
})

Make it look interesting

I mentioned earlier that part of the reason we'd want to store the base dataarts JavaScript in a mixin is that we'd want to make some modifications to its appearance. Let's talk about that for a minute as well because it's an aspect of any interesting data visualization.

If you don't know very much about working with three.js, it's a library that's pretty well documented and has a lot of examples to work off of. The real breakthrough in my understanding of what it was and how to work with it didn't really come from either of these sources, though. I got a lot out of Rachel Smith's series on CodePen and Chris Gammon's (not to be confused with Chris Gannon) excellent YouTube series. If you don't know much about three.js and would like to use it for 3D data visualization, my suggestion is to start there.

The first thing we'll do is adjust the colors of the pins on the globe. The ones out of the box are beautiful, but they don't fit the style of our page, or the magnification we need for this data. The code to update is on line 11 of our mixin:

const colorFn = opts.colorFn || function(x) {
  let c = new THREE.Color();
  c.setHSL(0.1 - x * 0.19, 1.0, 0.6);
  return c;
};

If you're not familiar with it, HSL is a wonderfully human-readable color format, which makes it easy to update the colors of our pins on a range:

  • H stands for hue, which is given to us as a circle. This is great for generative projects like this because, unlike a lot of other color formats, it will never fail: 20 degrees will give us the same value as 380 degrees, and so on. The x that we pass in here has a relationship with our magnification, so we'll want to figure out where that range begins and what it will increase by.
  • The second value is Saturation, which we'll pump up to full blast here so that it will stand out; on a range from 0 to 1, 1.0 is the highest.
  • The third value is Lightness. Like Saturation, we'll get a value from 0 to 1, and we'll use this halfway at 0.5.

You can see that if I made just a slight modification to that one line of code, to c.setHSL(0.6 - x * 0.7, 1.0, 0.4), it would change the color range dramatically.
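If you want to see which hues a formula produces before reloading the globe, the math is easy to check outside three.js. This sketch reproduces the hue part of c.setHSL(0.1 - x * 0.19, 1.0, 0.6) in degrees (hueDegrees is just a helper name for illustration, not part of three.js):

```javascript
// The hue portion of c.setHSL(0.1 - x * 0.19, 1.0, 0.6), in degrees.
// three.js takes hue as a 0..1 fraction; HSL hue wraps around the
// color wheel, so negative values just come back around.
function hueDegrees(x) {
  let h = (0.1 - x * 0.19) % 1;
  if (h < 0) h += 1; // wrap like the color wheel
  return h * 360;
}

console.log(hueDegrees(0));   // ~36  -- small magnitudes sit in the yellows
console.log(hueDegrees(0.5)); // ~1.8 -- mid magnitudes shift toward red
console.log(hueDegrees(1));   // ~328 -- large magnitudes wrap into pink
```

Plugging a few magnitudes into a helper like this is a quick way to sanity-check a palette tweak before touching the visualization itself.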

We'll also make some other fine-tuned adjustments: the globe will be a sphere, but it will use an image for the texture. If we wanted to change that shape to an icosahedron or even a torus knot, we could do so; we'd only need to change one line of code here:

//from
const geometry = new THREE.SphereGeometry(200, 40, 30);

//to
const geometry = new THREE.IcosahedronGeometry(200, 0);

and we'd get something like this; you can see that the texture will still map to this new shape:

Strange and cool, and maybe not useful in this instance, but it's really nice that creating a three-dimensional shape is so easy to update with three.js. Custom shapes get a bit more complex, though.

We load that texture differently in Vue than the way the library does: we'll need to get it as the component is mounted and load it in, passing it in as a parameter when we instantiate the globe. You'll notice that we don't have to create a relative path to the assets folder because Nuxt and Webpack will do that for us behind the scenes. We can easily use static image files this way.

mounted() {
  let earthmap = THREE.ImageUtils.loadTexture('https://cdn.css-tricks.com/world4.jpg');
  this.initGlobe(earthmap);
}

We'll then apply that texture we passed in here, when we create the material:

uniforms = THREE.UniformsUtils.clone(shader.uniforms);
uniforms['texture'].value = imageLoad;

material = new THREE.ShaderMaterial({
  uniforms: uniforms,
  vertexShader: shader.vertexShader,
  fragmentShader: shader.fragmentShader
});

There are so many ways we could work with this data and change the way it outputs: we could adjust the white bands around the globe, change the shape of the globe with one line of code, or surround it in particles. The sky's the limit!

And there we have it! We're using a serverless function to interact with the Google Maps API, we're using Nuxt to create the application with Server Side Rendering, we're using computed values in Vue to make that table slick, declarative and performant. Working with all of these technologies can yield really fun exploratory ways to look at data.

Article Series:
  1. Automatically Update GitHub Files With Serverless Functions
  2. Filtering and Using the Data (you are here!)

Exploring Data with Serverless and Vue: Filtering and Using the Data is a post from CSS-Tricks

Exploring Data with Serverless and Vue: Automatically Update GitHub Files With Serverless Functions

Tue, 10/10/2017 - 13:53

I work on a large team with amazing people like Simona Cotin, John Papa, Jessie Frazelle, Burke Holland, and Paige Bailey. We all speak a lot, as it's part of a developer advocate's job, and we're also frequently asked where we'll be speaking. For the most part, we each manage our own sites where we list all of this speaking, but that's not a very good experience for people trying to explore, so I made a demo that makes it easy to see who's speaking, at which conferences, when, with links to all of this information. Just for fun, I made use of three.js so that you can quickly visualize how many places we're all visiting.

You can check out the live demo here, or explore the code on GitHub.

In this tutorial, I'll run through how we set up the globe by making use of a Serverless function that gets geolocation data from Google for all of our speaking locations. I'll also run through how we're going to use Vuex (which is basically Vue's version of Redux) to store all of this data and output it to the table and globe, and how we'll use computed properties in Vue to make sorting through that table super performant and slick.

Article Series:
  1. Automatically Update GitHub Files With Serverless Functions (you are here!)
  2. Filtering and Using the Data (coming soon!)
Serverless Functions

What the heck?

Recently I tweeted that "Serverless is an actually interesting thing with the most clickbaity title." I'm going to stand by that here and say that the first thing anyone will tell you is that serverless is a misnomer because you're actually still using servers. This is true. So why call it serverless? The promise of serverless is to spend less time setting up and maintaining a server. You're essentially letting the service handle maintenance and scaling for you, and you boil what you need down to functions that state: when this request comes in, run this code. For this reason, sometimes people refer to them as functions as a service, or FaaS.

Is this useful? You bet! I love not having to babysit a server when it's unnecessary, and the payment scales automatically as well, which means you're not paying for anything you're not using.

Is FaaS the right thing to use all the time? Eh, not exactly. It's really useful if you'd like to manage small executions. Serverless functions can retrieve data, they can send email notifications, they can even do things like crop images on the fly. But for anything where you have processes that might hold up resources or a ton of computation, being able to communicate with a server as you normally do might actually be more efficient.

Our demo here is a good example of something we'd want to use serverless for, though. We're mostly just maintaining and updating a single JSON file. We'll have all of our initial speaker data, and we need to get geolocation data from Google to create our globe. We can have it all work triggered with GitHub commits, too. Let's dig in.

Creating the Serverless Function

We're going to start with a big JSON file that I outputted from a spreadsheet of my coworkers' speaking engagements. That file has everything I need in order to make the table, but for the globe I'm going to use this webgl-globe from Google Data Arts that I'll modify. You can see in the readme that eventually I'll format my data to extract the years, but I'll also need the latitude and longitude of every location we're visiting.

var data = [
  [
    'seriesA',
    [ latitude, longitude, magnitude, latitude, longitude, magnitude, ... ]
  ],
  [
    'seriesB',
    [ latitude, longitude, magnitude, latitude, longitude, magnitude, ... ]
  ]
];

Eventually, I'll also have to reduce the duplicated instances per year to make the magnitude, but we'll tackle that modification of our data within Vue in the second part of this series.
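As a rough preview of that reduction step, the flat list of geocoded entries could be folded into webgl-globe's series format like this (a sketch, not the code from part two: the `Latitude`/`Longitude` field names match what our serverless function will write, and the fixed per-visit magnitude bump is a simplification):

```javascript
// Fold a list of geocoded entries into webgl-globe's
// [ 'series', [ lat, long, magnitude, ... ] ] shape.
// Repeated visits to the same location grow that location's magnitude.
function toGlobeSeries(name, entries, bump = 0.1) {
  const seen = new Map();

  entries.forEach(entry => {
    const key = entry.Latitude + ',' + entry.Longitude;
    const loc = seen.get(key) ||
      { lat: entry.Latitude, long: entry.Longitude, magnitude: 0 };
    loc.magnitude += bump; // duplicated locations make a taller spike
    seen.set(key, loc);
  });

  // Flatten into the [lat, long, magnitude, lat, long, magnitude, ...] list
  const flat = [];
  seen.forEach(loc => flat.push(loc.lat, loc.long, loc.magnitude));
  return [name, flat];
}
```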

To get started, if you haven't already, create a free Azure trial account. Then go to the portal: ms.portal.azure.com

Inside, you'll see a sidebar that has a lot of options. At the top it will say new. Click that.

Next, we'll select function app from the list and fill in the new name of our function. This will give us some options. You can see that it will already pick up our resource group, subscription, and create a storage account. It will also use the location data from the resource group so, happily, it's pretty easy to populate, as you can see in the GIF below.

The defaults are probably pretty good for your needs. As you can see in the GIF above, it will autofill most of the fields just from the App name. You may want to change your location based on where most of your traffic is coming from, or from a midpoint (i.e. if you have a lot of traffic both in San Francisco and New York), it might be best to choose a location in the middle of the United States.

The hosting plan can be Consumption (the default) or App Service Plan. I chose Consumption because resources are added or subtracted dynamically, which is the magic of this whole serverless thing. If you'd like a higher level of control or detail, you'd probably want the App Service plan, but keep in mind that this means you'll be manually scaling and adding resources, so it's extra work on your part.

You'll be taken to a screen that shows you a lot of information about your function. Check to see that everything is in order, and then click the functions plus sign on the sidebar.

From there you'll be able to pick a template. We're going to page down a bit and pick GitHub Webhook - JavaScript from the options given.

Selecting this will bring you to a page with an `index.js` file. You'll be able to enter code if you like, but they give us some default code to run an initial test to see everything's working properly. Before we create our function, let's first test it out to see that everything looks ok.

We'll hit the save and run buttons at the top, and here's what we get back. You can see the output gives us a comment, we get a status of 200 OK in green, and we get some logs that validate our GitHub webhook successfully triggered.

Pretty nice! Now here's the fun part: let's write our own function.

Writing our First Serverless Function

In our case, we have the location data for all of the speeches, which we need for our table, but in order to make the JSON for our globe, we will need one more bit of data: the latitude and longitude of all the speaking events. The JSON file will be read by our Vuex central store, and we can pass out to each component the parts it needs.

The file that I used for the serverless function is stored in my GitHub repo. You can explore the whole file here, but let's also walk through it a bit:

The first thing I'll mention is that I've populated these variables with config options for the purposes of this tutorial because I don't want to give you all my private info. I mean, it's great, we're friends and all, but I just met you.

// GitHub configuration is read from process.env
let GH_USER = process.env.GH_USER;
let GH_KEY = process.env.GH_KEY;
let GH_REPO = process.env.GH_REPO;
let GH_FILE = process.env.GH_FILE;

In a real world scenario, I could just drop in the data:

// GitHub configuration
let GH_USER = 'sdras';

… and so on. In order to use these environment variables (in case you'd also like to store them and keep them private), you can use them like I did above, and go to your function in the dashboard. There you will see an area called Configured Features. Click application settings and you'll be taken to a page with a table where you can enter this information.
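Whichever way you store them, it's worth failing fast when a setting is missing so a misconfigured function errors at startup rather than mid-request. A small sketch of that guard (the `requireEnv` helper and its name are my own, not part of the article's code):

```javascript
// Read required settings from process.env, throwing immediately
// when one is absent, so configuration problems surface right away.
function requireEnv(names) {
  const config = {};
  names.forEach(name => {
    const value = process.env[name];
    if (!value) {
      throw new Error('Missing required app setting: ' + name);
    }
    config[name] = value;
  });
  return config;
}
```

Used like `const { GH_USER, GH_REPO } = requireEnv(['GH_USER', 'GH_REPO']);`, this keeps the secrets out of the repo while still catching typos in the Application settings table.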

Working with our dataset

First, we'll retrieve the original JSON file from GitHub and decode/parse it. The GitHub API serves file content base64 encoded, so we're going to use a method that gets the file from the GitHub response and decodes it (more information on that here).

module.exports = function(context, data) {
  // Make the context available globally
  gContext = context;

  getGithubJson(githubFilename(), (data, err) => {
    if (!err) {
      // No error; base64 decode and JSON parse the data from the GitHub response
      let content = JSON.parse(
        new Buffer(data.content, 'base64').toString('ascii')
      );

Then we'll retrieve the geo-information for each item in the original data. If all goes well, we'll push the result back up to GitHub; otherwise, it will error. We'll have two errors: one for a general error, and another for when we get a correct response but there is a geo error, so we can tell them apart. You'll note that we're using gContext.log to output to our portal console.

      getGeo(makeIterator(content), (updatedContent, err) => {
        if (!err) {
          // we need to base64 encode the JSON to embed it into the PUT (dear god, why)
          let updatedContentB64 = new Buffer(
            JSON.stringify(updatedContent, null, 2)
          ).toString('base64');
          let pushData = {
            path: GH_FILE,
            message: 'Looked up locations, beep boop.',
            content: updatedContentB64,
            sha: data.sha
          };
          putGithubJson(githubFilename(), pushData, err => {
            context.log('All done!');
            context.done();
          });
        } else {
          gContext.log('All done with get Geo error: ' + err);
          context.done();
        }
      });
    } else {
      gContext.log('All done with error: ' + err);
      context.done();
    }
  });
};
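The `makeIterator` helper used above isn't reproduced in the article. Judging from how `getGeo` consumes it (`next()` returning `done` and `value`, plus a `data` property carrying the full array for the final callback), a minimal version might look like this (a sketch of a plausible implementation, not the author's exact code):

```javascript
// A hand-rolled iterator over the entries array. Unlike a native iterator,
// each step also carries the whole array in `data`, which is what getGeo
// hands to its callback once every entry has been processed.
function makeIterator(array) {
  let nextIndex = 0;
  return {
    next() {
      return nextIndex < array.length
        ? { value: array[nextIndex++], done: false, data: array }
        : { value: undefined, done: true, data: array };
    }
  };
}
```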

Great! Now, given an array of entries (wrapped in an iterator), we'll walk over each of them and populate the latitude and longitude using the Google Maps API. Note that we also cache locations to try and save some API calls.

function getGeo(itr, cb) {
  let curr = itr.next();
  if (curr.done) {
    // All done processing - pass the (now-populated) entries to the next callback
    cb(curr.data);
    return;
  }
  let location = curr.value.Location;

Now let's check the cache to see if we've already looked up this location:

  if (location in GEO_CACHE) {
    gContext.log(
      'Cached ' + location + ' -> ' +
      GEO_CACHE[location].lat + ' ' + GEO_CACHE[location].long
    );
    curr.value.Latitude = GEO_CACHE[location].lat;
    curr.value.Longitude = GEO_CACHE[location].long;
    getGeo(itr, cb);
    return;
  }

Then if there's nothing found in cache, we'll do a lookup and cache the result, or let ourselves know that we didn't find anything:

  getGoogleJson(location, (data, err) => {
    if (err) {
      gContext.log('Error on ' + location + ' :' + err);
    } else {
      if (data.results.length > 0) {
        let info = {
          lat: data.results[0].geometry.location.lat,
          long: data.results[0].geometry.location.lng
        };
        GEO_CACHE[location] = info;
        curr.value.Latitude = info.lat;
        curr.value.Longitude = info.long;
        gContext.log(location + ' -> ' + info.lat + ' ' + info.long);
      } else {
        gContext.log(
          "Didn't find anything for " + location + ' ::' + JSON.stringify(data)
        );
      }
    }
    setTimeout(() => getGeo(itr, cb), 1000);
  });
}
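Stripped of the geocoding specifics, that check-cache-then-lookup pattern is a general one worth keeping in a back pocket. Distilled into a reusable sketch (where `lookupFn` stands in for the Google call; this helper is mine, not the article's):

```javascript
// Generic cache-or-fetch: consult the cache first, otherwise call the
// (possibly expensive) lookup once and remember the answer for next time.
function makeCachedLookup(lookupFn) {
  const cache = {};
  return function(key) {
    if (key in cache) {
      return cache[key]; // cache hit: no lookup performed
    }
    cache[key] = lookupFn(key); // cache miss: do the real work once
    return cache[key];
  };
}
```

For rate-limited APIs like geocoding, this means a location that appears in ten speaking entries costs one API call instead of ten.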

We've made use of some helper functions along the way that help get Google JSON, and get and put GitHub JSON.
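Those helpers aren't reproduced here, but the `githubFilename()` piece is easy to imagine: it assembles the path to GitHub's standard repository contents endpoint from the config variables. A hypothetical version (the article's helper takes no arguments and reads module-level config; this sketch passes the values explicitly to stay self-contained):

```javascript
// Build the GitHub contents-API URL for the configured file.
// GH_USER / GH_REPO / GH_FILE mirror the config variables used earlier;
// the endpoint shape is GitHub's /repos/{owner}/{repo}/contents/{path}.
function githubFilename(GH_USER, GH_REPO, GH_FILE) {
  return 'https://api.github.com/repos/' + GH_USER + '/' + GH_REPO +
    '/contents/' + GH_FILE;
}
```

A GET to that URL returns the base64-encoded file plus its `sha`, and a PUT to the same URL (with the `sha` included, as in `pushData` above) commits the updated content.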

Now if we run this function in the portal, we'll see our output:

It works! Our serverless function updates our JSON file with all of the new data. I really like that I can work with backend services without stepping outside of JavaScript, which is familiar to me. We need only git pull and we can use this file as the state in our Vuex central store. This will allow us to populate the table, which we'll tackle in the next part of our series, and we'll also use that to update our globe. If you'd like to play around with a serverless function and see it in action for yourself, you can create one with a free trial account.

Stay tuned for the next installment!

Article Series:
  1. Automatically Update GitHub Files With Serverless Functions (you are here!)
  2. Filtering and Using the Data (coming soon!)

Exploring Data with Serverless and Vue: Automatically Update GitHub Files With Serverless Functions is a post from CSS-Tricks

Building a Progress Ring, Quickly

Mon, 10/09/2017 - 14:11

On some particularly heavy sites, the user needs a temporary visual cue indicating that resources and assets are still loading before they can take in the finished site. There are different approaches to solving this kind of UX problem, from spinners to skeleton screens.

If we are using an out-of-the-box solution that provides us with the current progress, like the preloader package by Jam3 does, building a loading indicator becomes easier.

For this, we will make a ring/circle, style it, animate it given a progress value, and then wrap it in a component for development use.

Step 1: Let's make an SVG ring

From the many ways available to draw a circle using just HTML and CSS, I'm choosing SVG since it's possible to configure and style through attributes while preserving its resolution in all screens.

<svg
  class="progress-ring"
  height="120"
  width="120"
>
  <circle
    class="progress-ring__circle"
    stroke-width="1"
    fill="transparent"
    r="58"
    cx="60"
    cy="60"
  />
</svg>

Inside an <svg> element we place a <circle> tag, where we declare the radius of the ring with the r attribute, its position from the center in the SVG viewBox with cx and cy, and the width of the circle stroke with stroke-width.

You might have noticed the radius is 58 and not 60 which would seem correct. We need to subtract the stroke or the circle will overflow the SVG wrapper.

radius = (width / 2) - (strokeWidth * 2)

This means that if we increase the stroke to 4, then the radius should be 52.

52 = (120 / 2) - (4 * 2)
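That relationship can be wrapped in a tiny helper so the radius and stroke always stay in sync when either changes (a sketch; the article simply hardcodes the result):

```javascript
// Given the SVG's width/height and the stroke width, compute the
// largest radius that keeps the ring fully inside the viewBox:
// radius = (size / 2) - (strokeWidth * 2)
function ringRadius(svgSize, strokeWidth) {
  return svgSize / 2 - strokeWidth * 2;
}
```

This is exactly the `normalizedRadius` calculation that reappears later in the web component, Vue, and React versions.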

So that it looks like a ring, we need to set its fill to transparent and choose a stroke color for the circle.

See the Pen SVG ring by Jeremias Menichelli (@jeremenichelli) on CodePen.

Step 2: Adding the stroke

The next step is to animate the length of the outer line of our ring to simulate visual progress.

We are going to use two CSS properties that you might not have heard of before since they are exclusive to SVG elements, stroke-dasharray and stroke-dashoffset.


stroke-dasharray

This property is like border-style: dashed but it lets you define the width of the dashes and the gap between them.

.progress-ring__circle { stroke-dasharray: 10 20; }

With those values, our ring will have 10px dashes separated by 20px.

See the Pen Dashed SVG ring by Jeremias Menichelli (@jeremenichelli) on CodePen.


stroke-dashoffset

The second one allows you to move the starting point of this dash-gap sequence along the path of the SVG element.

Now, imagine if we passed the circle's circumference to both stroke-dasharray values. Our shape would have one long dash occupying the whole length and a gap of the same length which wouldn't be visible.

This will cause no change initially, but if we also set the stroke-dashoffset to the same length, then the long dash will move all the way and reveal the gap.

Decreasing stroke-dashoffset would then start to reveal our shape.

A few years ago, Jake Archibald explained this technique in this article, which also has a live example that will help you understand it better. You should go read his tutorial.

The circumference

What we need now is that length, which can be calculated with the radius and this simple geometric formula.

circumference = radius * 2 * PI

Since we know 52 is the radius of our ring:

326.7256 ~= 52 * 2 * PI

We could also get this value by JavaScript if we want:

const circle = document.querySelector('.progress-ring__circle');
const radius = circle.r.baseVal.value;
const circumference = radius * 2 * Math.PI;

This way we can later assign styles to our circle element.

circle.style.strokeDasharray = `${circumference} ${circumference}`;
circle.style.strokeDashoffset = circumference;

Step 3: Progress to offset

With this little trick, we know that assigning the circumference value to stroke-dashoffset will reflect the status of zero progress and the 0 value will indicate progress is complete.

Therefore, as the progress grows we need to reduce the offset like this:

function setProgress(percent) {
  const offset = circumference - percent / 100 * circumference;
  circle.style.strokeDashoffset = offset;
}

By transitioning the property, we will get the animation feel:

.progress-ring__circle { transition: stroke-dashoffset 0.35s; }

One particular thing about stroke-dashoffset: its starting point is vertically centered and horizontally tilted to the right. It's necessary to negatively rotate the circle to get the desired effect.

.progress-ring__circle {
  transition: stroke-dashoffset 0.35s;
  transform: rotate(-90deg);
  transform-origin: 50% 50%;
}

Putting all of this together will give us something like this.

See the Pen vegymB by Jeremias Menichelli (@jeremenichelli) on CodePen.

A numeric input was added in this example to help you test the animation.

For this to be easily coupled inside your application it would be best to encapsulate the solution in a component.

As a web component

Now that we have the logic, the styles, and the HTML for our loading ring we can port it easily to any technology or framework.

First, let's use web components.

class ProgressRing extends HTMLElement {...}

window.customElements.define('progress-ring', ProgressRing);

This is the standard declaration of a custom element, extending the native HTMLElement class, which can be configured by attributes.

<progress-ring stroke="4" radius="60" progress="0"></progress-ring>

Inside the constructor of the element, we will create a shadow root to encapsulate the styles and its template.

constructor() {
  super();

  // get config from attributes
  const stroke = this.getAttribute('stroke');
  const radius = this.getAttribute('radius');
  const normalizedRadius = radius - stroke * 2;
  this._circumference = normalizedRadius * 2 * Math.PI;

  // create shadow dom root
  this._root = this.attachShadow({ mode: 'open' });
  this._root.innerHTML = `
    <svg
      height="${radius * 2}"
      width="${radius * 2}"
    >
      <circle
        stroke="white"
        stroke-dasharray="${this._circumference} ${this._circumference}"
        style="stroke-dashoffset:${this._circumference}"
        stroke-width="${stroke}"
        fill="transparent"
        r="${normalizedRadius}"
        cx="${radius}"
        cy="${radius}"
      />
    </svg>

    <style>
      circle {
        transition: stroke-dashoffset 0.35s;
        transform: rotate(-90deg);
        transform-origin: 50% 50%;
      }
    </style>
  `;
}

You may have noticed that we have not hardcoded the values into our SVG; instead, we are getting them from the attributes passed to the element.

Also, we are calculating the circumference of the ring and setting stroke-dasharray and stroke-dashoffset ahead of time.

The next thing is to observe the progress attribute and modify the circle styles.

setProgress(percent) {
  const offset = this._circumference - (percent / 100 * this._circumference);
  const circle = this._root.querySelector('circle');
  circle.style.strokeDashoffset = offset;
}

static get observedAttributes() {
  return [ 'progress' ];
}

attributeChangedCallback(name, oldValue, newValue) {
  if (name === 'progress') {
    this.setProgress(newValue);
  }
}

Here setProgress becomes a class method that will be called when the progress attribute is changed.

The observedAttributes are defined by a static getter which will trigger attributeChangedCallback when, in this case, progress is modified.

See the Pen ProgressRing web component by Jeremias Menichelli (@jeremenichelli) on CodePen.

This Pen only works in Chrome at the time of this writing. An interval was added to simulate the progress change.
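That simulated-progress driver looks roughly like this (a sketch of what the Pen does, not its exact code): the wrap-around stepping logic is pulled into a small function, and the DOM update is guarded so the snippet doesn't crash outside a browser. The element tag and attribute match the component defined above.

```javascript
// Advance progress by `step`, wrapping back to 0 after passing 100.
function nextProgress(current, step) {
  const next = current + step;
  return next > 100 ? 0 : next;
}

// Drive the component by rewriting its observed `progress` attribute
// on an interval; attributeChangedCallback handles the rest.
if (typeof document !== 'undefined') {
  let progress = 0;
  const ring = document.querySelector('progress-ring');
  setInterval(() => {
    progress = nextProgress(progress, 5);
    ring.setAttribute('progress', progress);
  }, 500);
}
```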

As a Vue component

Web components are great. That said, some of the available libraries and frameworks, like Vue.js, can do quite a bit of the heavy-lifting.

To start, we need to define the Vue component.

const ProgressRing = Vue.component('progress-ring', {});

Writing a single file component is also possible and probably cleaner but we are adopting the factory syntax to match the final code demo.

We will define the attributes as props and the calculations as data.

const ProgressRing = Vue.component('progress-ring', {
  props: {
    radius: Number,
    progress: Number,
    stroke: Number
  },
  data() {
    const normalizedRadius = this.radius - this.stroke * 2;
    const circumference = normalizedRadius * 2 * Math.PI;

    return {
      normalizedRadius,
      circumference
    };
  }
});

Since computed properties are supported out-of-the-box in Vue we can use it to calculate the value of stroke-dashoffset.

computed: {
  strokeDashoffset() {
    return this.circumference - this.progress / 100 * this.circumference;
  }
}

Next, we add our SVG as a template. Notice that the easy part here is that Vue provides us with bindings, bringing JavaScript expressions inside attributes and styles.

template: `
  <svg
    :height="radius * 2"
    :width="radius * 2"
  >
    <circle
      stroke="white"
      fill="transparent"
      :stroke-dasharray="circumference + ' ' + circumference"
      :style="{ strokeDashoffset }"
      :stroke-width="stroke"
      :r="normalizedRadius"
      :cx="radius"
      :cy="radius"
    />
  </svg>
`

When we update the progress prop of the element in our app, Vue takes care of computing the changes and updating the element styles.

See the Pen Vue ProgressRing component by Jeremias Menichelli (@jeremenichelli) on CodePen.

Note: An interval was added to simulate the progress change. We do that in the next example as well.

As a React component

In a similar way to Vue.js, React helps us handle all the configuration and computed values thanks to props and JSX notation.

First, we obtain some data from props passed down.

class ProgressRing extends React.Component {
  constructor(props) {
    super(props);

    const { radius, stroke } = this.props;
    this.normalizedRadius = radius - stroke * 2;
    this.circumference = this.normalizedRadius * 2 * Math.PI;
  }
}

Our template is the return value of the component's render function where we use the progress prop to calculate the stroke-dashoffset value.

render() {
  const { radius, stroke, progress } = this.props;
  const strokeDashoffset = this.circumference - progress / 100 * this.circumference;

  return (
    <svg
      height={radius * 2}
      width={radius * 2}
    >
      <circle
        stroke="white"
        fill="transparent"
        strokeWidth={ stroke }
        strokeDasharray={ this.circumference + ' ' + this.circumference }
        style={ { strokeDashoffset } }
        r={ this.normalizedRadius }
        cx={ radius }
        cy={ radius }
      />
    </svg>
  );
}

A change in the progress prop will trigger a new render cycle recalculating the strokeDashoffset variable.

See the Pen React ProgressRing component by Jeremias Menichelli (@jeremenichelli) on CodePen.

Wrap up

The recipe for this solution is based on SVG shapes and styles, CSS transitions, and a little JavaScript to compute special attributes and simulate drawing the circumference.

Once we separate out this little piece, we can port it to any modern library or framework and include it in our app. In this article we explored web components, Vue, and React.

Building a Progress Ring, Quickly is a post from CSS-Tricks