CT No.60: Saving the web from a flesh-eating disease of our own making
Anticipating a Google algorithm update and getting rid of UX pestilence. Plus, a collaboration tool for product writers and developers
Thanks to all who completed the questionnaire about further education on Content Technologist topics last week. If you’re still interested in learning more about the topics covered in this newsletter, click the button below.
I’ll be responding to everyone this week.
In this week’s Content Technologist:
How to identify horrific website UX that will likely be penalized by an upcoming Google algorithm update
A review of development-friendly product content tool Strings
A monster list of links about race in media, best practices for blogging, fonts, trolling AI facial recognition software, and more
Four weeks of October internet horrors
One of my least favorite content/UX strategist traditions is shit-talking websites or content initiatives that others have clearly worked hard to make as good as they can.
Especially since I’m a fan of a cross-disciplinary approach to content, putting down others’ poorly structured information architecture or weird navigation items introduces negativity when, generally, most people who have worked on websites have put together what they think is a good user experience.
Whether you work in content strategy, SEO, journalism, content marketing, user experience, web development, product content, advertising, or social media: you want your audience to enjoy and find value in the work that you do. You don’t want them to have a bad time at the content party.
That said, 2020 has preyed upon our worst fears, our constant WFH means we’re spending wayyyy too much time online even when we would like to spend less, and all the little digital annoyances that used to seem like not-that-big-of-a-deal suddenly appear horrifically grotesque.
But it’s the month where we confront our greatest fears! I’m not referencing actual current events relating to the U.S. election, economic downturn and pandemic; I’m talking about scary movies!
For the remainder of October I'm describing some frightening content-technological realities in the context of our favorite scary movie tropes.
So, sit back, relax, and join me on a very creepy descent into website BODY HORROR.
The Content Technologist is free forever for the first 1,000 subscribers. And we’re almost there! So if you have any friends, colleagues, clients, supervisors, students, pets, children, or enemies who may dig this newsletter and might want to save a few dollars:
Somehow, it’s still alive: The web content disease that sprung from within
It all started with a surplus of cursed lemongrass.
After the abundance of late summer, October farmers' market selections in the Twin Cities slim out significantly. Aside from ubiquitous Honeycrisp apples and sad end-of-season tomatoes, we're mostly left with roots, tubers and cabbage. To cheer myself up in the flavor desert, I snagged a massive bundle of the Southeast Asian staple lemongrass for a mere $2.
My initial recipe inspiration — ginger lemongrass carrot soup — used but one stalk! I had purchased eight. It’s lean times, I thought, and life bequeathed me lemongrass. Surely I could squeeze some value from the excess.
Little did I know the horror waiting in store for me.
Lemongrass recipes, I Googled.
I know enough about blind recipe searches to know that I should really rely on trusted sites, especially for unfamiliar recipes. I've certainly made some duds from unproven food blogs. But ah! some ideas! I tapped on a promising collection from a familiar media brand that had served me well in the past: 38 lemongrass recipe ideas. I had barely loaded the navigation bar when…
Egads! The website crashed my mobile!
No worries, I thought, my trusty laptop computer will be able to handle this household food media brand’s recipe ideas collection.
After all, I’d optimized recipe collections just like this one for several household food brands and media companies. This brand sat atop search results for a reason: its high-quality content and well-rated recipes around lemongrass.
I clicked into the gaggle of Thai- and Vietnamese-inspired dishes. And I waited.
After a few seconds I impatiently scrolled down to the first recipe in the collection, whose image was a bit blurry (no CDN? homegrown media company servers?). Anyway, the header text indicated a beef recipe, and I try not to eat too much beef, so I kept scrolling.
Imagine my horror when, several severals of seconds later, the website still had not fully loaded.
Suddenly my eyes began to twitch. All the pictures on the website flashed, as if my vision was blurred.
Every time I’d scroll to an image or recipe the content would shift downward or rightward or off my screen as a new content block loaded.
I bet the ad is taking so long to load because it’s personalized, I thought. It will certainly be an ad relevant to my interests, perhaps additional lemongrass recipes or maybe a cool plastic kitchen gadget like a 3D-printed lemongrass-shaped box to store spare lemongrass.
But instead of something clever and new, the advertisement was strangely familiar... as my eyes began to bleed I realized that I was viewing an ad for a SaaS where I’ve been a paying client for more than 18 months. I see this ad at least 3 times a day. (Srsly SEMRush, please cookie me or something.)
Approximately 30 seconds later, a grand total of 10 of the 38 recipe ideas — and none of the recipes themselves, as those are on separate pages — had fully loaded.
And then a video sidebar started autoplaying an insurance commercial.
Raiders of the Lost Ark is hardly a horror movie, but surprisingly few “eyes bleeding” gifs show up in search, and, well, this one’s a classic.
Diagnosing a bad website in 2020
Bad website experiences on content-heavy sites are rarely any one person's or any one team's fault. Like all inadequate systems, bad websites are piles of mediocre decisions and unsatisfying compromises, usually due to several of the following:
Misunderstanding scope: We have no idea how much work actually solving the problem will be, and more often than not it's way more complicated than we originally expected.
Lack of resources: We don’t have enough money, human brains or time to do the best possible development, testing and execution of the planned initiative.
Band-aids and oversimplification: We use plug-ins and third-party software to remedy complex problems, instead of incorporating them holistically into the whole website ecosystem.
No product ownership: Each piece of the web puzzle—content, development, advertising, subscriptions, customer service, optimization—is created by siloed teams who don't work together and don't understand how their single tiny fix affects the whole web experience.
Desperation: We need more money and are willing to compromise user experience for some extra cash or just an end to a project that’s been going on too long.
Add in the ever-mutating factor of rapid tech change — finicky optimization guidelines, sexy new JavaScript frameworks, indecipherable data management and privacy regulations, decaying back-end servers that can't handle fancy front-end ambitions, faster devices, slower mobile networks, and inconsistent user behaviors — and we've got a website horrorshow reminiscent of early David Cronenberg films.
We can’t avert our eyes because we’re deeply in need of a new soup recipe.
But as web, UX and content strategists we can start with some diagnostics:
What is my audience’s state of mind (aka intent) when they reach a certain page from search, email, social or any other channel?
What content do they see first? Not the content actually written on the page, but the navigation, the images loading, the related content widgets, and the ad units.
Does content shift on the page as it loads on any device?
In the above story, I’m a user literally looking for ideas, expecting that I’m going to click through to another recipe as I’ve been trained to do during the past 15 years of researching recipes online. All I really need is a list of recipe names, maybe an identification of any special ingredients or equipment needed for a recipe, and maybe a thumbnail image.
A recipe collection is what I’d consider a gateway page. It directs users to other pieces of content. It doesn’t need to be more complex than a categorized list. But it does need to load quickly and avoid frustrating the user.
Understanding core web vitals and similar metrics
Earlier this year, Google announced that its Core Web Vitals metrics would become ranking factors in its algorithm sometime in 2021. These vital signs are:
Largest contentful paint - how long it takes for the largest piece of content in the viewport to finish rendering. Google says it should happen within 2.5 seconds of when the page starts loading; it builds on First Contentful Paint, a metric SEOs and devs have long used to identify how long it takes for the first significant identifiable content to appear on your website.
First input delay - how long the browser takes to start responding to a user's first interaction, like a tap or click. Google says it should be 100 milliseconds or less.
Cumulative layout shift - how much the actual content on the page moves around as it loads. In Google's metrics, this measure should be 0.1 or less.
The powers that be have also acronymed these metrics for your ongoing confusion, and you can view LCP, FID and CLS in the Core Web Vitals report in Search Console.
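If you want to peek at these numbers yourself before Search Console catches up, the browser exposes the raw measurements through the standard PerformanceObserver API. Below is a minimal TypeScript sketch you could paste into a page or DevTools console; it approximates all three metrics but skips the edge-case handling in Google's open-source web-vitals library, which is what you'd actually want for field data.

```typescript
// Minimal sketch: approximate LCP, FID and CLS with the browser's
// standard PerformanceObserver API. Not production analytics code.

// Largest Contentful Paint: the last candidate entry reported before
// user input is the page's LCP.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const lastEntry = entries[entries.length - 1];
  console.log('LCP candidate (ms):', lastEntry.startTime);
}).observe({ type: 'largest-contentful-paint', buffered: true });

// First Input Delay: time between the user's first interaction and the
// moment the browser can start processing it.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const e = entry as PerformanceEventTiming;
    console.log('FID (ms):', e.processingStart - e.startTime);
  }
}).observe({ type: 'first-input', buffered: true });

// Cumulative Layout Shift: running sum of layout-shift scores that
// weren't triggered by recent user input.
let cls = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as any[]) {
    if (!entry.hadRecentInput) cls += entry.value;
  }
  console.log('CLS so far:', cls.toFixed(3));
}).observe({ type: 'layout-shift', buffered: true });
```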
Historically, Google's ideal technical thresholds have been so stringent that it's extremely difficult for any website to meet them, so much so that I've seen marketers and developers dismiss them altogether. ("Even Google.com doesn't meet its own PageSpeed requirements!" a development partner once said.)
My extremely basic, Ghost-hosted, HTTPS-secured website still registers as Poor for CLS, probably because of my lefthand sidebar. I don't think the 0.1 CLS threshold is remotely reasonable for most websites with complex information. But that doesn't mean you should throw out the metric entirely.
Instead, benchmark and aim for improvement. Look at your website — ad blockers off, with an incognito window. Would the content shifts be annoying to someone who is not only casually searching for information, but reading that information as well?
Benchmarking core web vitals
When you need hard data to benchmark your own and others' sites, use Google's Lighthouse and PageSpeed tester. To corroborate data from the big G, use WebPageTest.org (yes, it's a real website and it's great!). But these tests are mostly lab emulators, which means they simulate a page load rather than measure real users' experiences. They often produce false positives, a common outcome when computers automate human jobs.
And you’re making content for humans who don’t want their eyes to bleed out.
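If you'd rather pull that lab data programmatically — say, for a weekly benchmark of your own site and a couple of competitors — the PageSpeed Insights v5 API returns the Lighthouse lab run plus aggregated real-user Chrome data wherever Google has enough traffic to report it. A hedged sketch: the recipe URL below is hypothetical, and for anything beyond occasional manual runs you'd want to add an API key.

```typescript
// Minimal sketch: fetch lab + field data from the PageSpeed Insights v5 API.
// Assumes Node 18+ (built-in fetch); adapt for a browser console if needed.

async function benchmark(pageToTest: string) {
  const endpoint =
    'https://www.googleapis.com/pagespeedonline/v5/runPagespeed' +
    `?url=${encodeURIComponent(pageToTest)}&strategy=mobile`;

  const psi = await (await fetch(endpoint)).json();

  // Field data (aggregated real-user measurements), when available:
  console.log('Field data:', psi.loadingExperience?.metrics);

  // Lab data from the embedded Lighthouse run:
  const audits = psi.lighthouseResult?.audits ?? {};
  console.log('Lab LCP:', audits['largest-contentful-paint']?.displayValue);
  console.log('Lab CLS:', audits['cumulative-layout-shift']?.displayValue);
}

// Hypothetical page, standing in for that cursed recipe collection:
benchmark('https://example.com/recipes/lemongrass-ideas');
```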
My guidelines for benchmarking:
If they come up with negative results, yikes! You probably have the data you need to make the case for investing in better web performance. But you should still run your own tests.
If they come up with positives — lots of green lights — cool. But corroborate with your own data from using the site.
Look at your website — ad blockers off, with an incognito window. Use screen video recorders if you need to document the experience (varies depending on device, OS, but they’re easy to find).
If the website is slow or patchy on desktop, that's bad. That means it's even slower on mobile, and in 2020 we optimize for mobile experiences.
If the website is fast on desktop, you should still test on mobile.
As the website is loading, consider the mindset of your user. Based on the page’s core topic and the kinds of search queries that attract them, are they searching for quick information or in-depth explanation?
User experience and page load metrics are crucial to keep readers around, and our goal with content is to build regular revenue-generating audiences. Neither advertisers nor users want a frustrating content experience, and you won’t make any money if your website doesn’t load.
Not only that — they are going to become ranking factors in the Google search algorithm, which means that you stand less of a chance of driving traffic you don’t have to pay for. Unlike social networks, whose content algorithms have historically changed at whim, Google is becoming more transparent about exactly what it values.
Read more about core web vitals from Moz and Backlinko, the only two devoted SEO blogs that don’t have an absolute garbage web experience.
But I still hadn’t figured out what to do with my lemongrass yet…
What UX evil hath I wrought in the name of content value?
After about ten minutes of waiting and scrolling and waiting and scrolling, I'd tabbed up ten or so potential recipes. I was still shuddering from my earlier browsing frights as each tab turned from wheel-of-anticipation to fully loaded favicon.
On the most promising recipe, I scrolled down to the ingredients. I have most of these in house, I thought. Ginger, coconut milk, chicken stock…
Suddenly the entire list of ingredients had disappeared. Where had they gone? I scrolled down, assuming that a content unit had pushed them farther on the page, but suddenly I was in the reviews section.
“I made this recipe except I subbed out soy milk for coconut milk and lemons for lemongrass and it did not turn out as expected. One star,” one comment read.
“Excellent recipe,” another reader wrote. Brevity is the soul of filler content, I guess.
My stomach gurgled a little bit. After all, I’d been searching for recipes so long I was half an hour late to begin dinner prep.
Turns out I actually needed to scroll up from the reviews to read the full recipe. This whole process was making me nauseous. I should just stick to all those cookbooks I bought at the beginning of the pandemic. But I was at the point of no return.
Jotting down the ingredients, I scanned the method. All seventeen preparatory steps were lumped into one giant hunk of text, never broken into the kind of ordered list that's been part of web design since the early days of HTML. Still, no matter: at least everything was in one place. Anyway, if the steps were individually <li>ed out, I'd have to deal with some sort of video in between each item.
As I read, I blanched at the sight of a never-before-seen horror. My stomach dropped as my own words echoed in my head, slowed down so they were super deep, like a demon or maybe some misused autotune.
These tormenting words, all of which I'd said years ago while advising a client: "Use your popular content pages to surface related content that might interest users. It will increase time on site."
Those words, and others like them spoken by content strategists around the world in the mid-2010s, had led to this:
A rough approximation of my gastro-intestinal state after reading the phrase “wet burger”:
Resolving the web experience horrorshow
The above experience is hyperbole, of course, but every example is real, albeit a composite of two separate well-known food media publications.
Google hasn't yet announced the timing of the core web vitals update, but we need to acknowledge and anticipate the impact it may have. For many content-heavy websites with ads and dynamic elements, these algo updates may bring significant losses in traffic, so you won't even get to gross out your audience with clickbait once they arrive!
When we review our web strategies for 2021, let's ask: how can we make this less of a giant, overgrown field of open sores? What does this website need to be? How can we reimagine it so we grow an audience? Do we really need to surface a recipe called "wet burger" with no additional context, even if it gets a total of 240 searches per month because of a critically panned restaurant from a meme chef? Will the clickbait annoy as many people as it attracts?
Questions to mitigate the impact of the core web vitals update
Sometime in the next year, the core web vitals update will roll out, and most of the in-depth technical fixes it calls for take significant time and collaboration to execute. Start asking now:
Is our website putting users in a frustrated state before they even get to the content?
Are all the dynamically populated content widgets actually making the content experience better?
Are all of the elements on the page serving a helpful purpose, or are they still there because we "need them for SEO"? (e.g., if CLS and LCP are going to be ranking factors, decide whether a plethora of old reviews is worth the extra drag on page speed)
If we had one less dynamic or automated element on the page, would it speed up the site enough to make a difference?
Is our website accessible to all people? (Accessibility and privacy are far more important than aesthetics, but they’re not as in-your-face as the body horror described here.)
Are people regularly using our chat widgets and newsletter pop-ups, or has the novelty worn off?
Is our audience up for extra annoyances in their lives? For example, if we're targeting busy parents, will the site benefit from annoying them, or are we driving them away?
Are we presenting the content in the best way for the purpose it is supposed to serve?
If we structured content differently and removed ads, would our audience pay a subscription fee for more convenience? (Massive opportunity for this in recipe pubs with an existing library, fwiw.)
Identify your risk tolerance: how much revenue could you lose from removing, I dunno, two ad units from your page?
Or if you removed the native content modules and replaced them with more custom ad units?
What if you updated the page content to contain more in-depth information, something more relevant to the post-pandemic audience who is willing to sit and learn a little longer?
Check all your annoyance metrics: the core web vitals above, bounce rate, exit rate. Calculate any year-over-year changes in volume of traffic, session duration and time on page from organic search. If the content experience is that poor, you can almost always find several data patterns to back it up.
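The year-over-year math itself is simple enough to script once you've exported the numbers. Here's a minimal sketch with entirely made-up placeholder values, just to show the calculation:

```typescript
// Minimal sketch: year-over-year change for a few organic-search metrics.
// All values below are hypothetical placeholders, not real analytics data.

interface MetricSnapshot {
  sessions: number;
  avgSessionDurationSec: number;
  bounceRate: number; // expressed as a fraction, e.g. 0.62 = 62%
}

const lastYear: MetricSnapshot = { sessions: 48000, avgSessionDurationSec: 95, bounceRate: 0.62 };
const thisYear: MetricSnapshot = { sessions: 41000, avgSessionDurationSec: 78, bounceRate: 0.71 };

const yoyChange = (now: number, then: number) => ((now - then) / then) * 100;

for (const key of Object.keys(lastYear) as (keyof MetricSnapshot)[]) {
  console.log(`${key}: ${yoyChange(thisYear[key], lastYear[key]).toFixed(1)}% year over year`);
}
```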
In the coming decade, especially with the experience of living digitally in 2020, websites that resemble Rosemary's Baby will gradually become irrelevant. We certainly have some fresh horrors coming down the pike, but we can keep the good stuff from festering.
Meanwhile, drop your favorite cookbook recommendations — or your own website horror stories — in the comments. Until the recipe website mess is fixed, I'd rather spend holiday cooking season with some good old-fashioned print.
Next week: It’s coming from inside the house!
Product writers and developers, unite!: A review of Strings
The early Silicon Valley ethos hinged on engineers building algorithms, user interfaces, graphic design, data collection, and product copy themselves, disdaining the well-honed practices of writers and editors in favor of shipping products quickly.
In the 2010s, the rise of fields like content strategy and content design — and the clear benefits of diverse, collaborative, multidisciplinary thinking — has led more product developers to understand the value of trained writers.
However, the workflow software remained separate: developers used Github or similar tools, and writers used Microsoft Office or Google Docs.
When creating or editing product microcopy — from button text to error messages and everything in between — most UX content strategists manually scanned every app and screen, typing out changes to CTAs or navigation items in docs or spreadsheets for developers to copy-paste into their code workflow.
Everyone then QAed (ran quality assurance, or proofreading, processes on) the changes. For both developers and writers, this process has long been a royal pain riddled with errors, compromises and duplicated work.
But a few tools have been recently developed to better structure UX writing for both writers and developers. This week’s tool, Strings, enables product writers to audit and submit corrections into a development workflow.
Strings at a glance
Developers and editors/writers each have their own professional vernacular for certain tasks, even though the actions are essentially the same across disciplines. Throughout this review, I’m going to refer to the development term, followed by the relevant writer/editor term in parentheses.
Like so: Quality assurance/QA (proofreading).
Strings links text editing directly with Github, the industry-standard development workflow tool. Once the dev team links Strings with Github, writers can pull all copy strings (sentences, words) from a project (giant hunk of code) or individual code snippets (paragraphs), then rewrite and make any needed changes to the text alone, without touching the code itself.
The text edits then become pull requests (track changes/suggested edits) that are communicated back to Github. Developers can merge (accept) or close (delete/stet) the changes to the text within their Github workflow. Based on the demo, it’s slick.
Since I'm not using Strings myself, I spoke with a Strings customer, who said the tool generally made product writing clearer for both writers and developers and eliminated some of the copy/paste work on his end. Strings is still working out a few kinks — mainly around automating communication of product changes — but he noted the customer service has been highly personalized and incredibly responsive for its early adopters.
The customer also mentioned: Strings isn’t really appropriate for minor text changes every now and then, but works best during a larger, more focused effort on product writing as a whole.
Based on this discussion, I recommend Strings for:
Product teams who are actively auditing and editing large amounts of copy
Writers and editors who want more visibility into how their brand voice is reflected throughout an application
Development teams who wish editorial changes weren’t so time consuming
Strings is absolutely a helpful tool for collaboration between wordsmiths and coders, and for merging multiple specialties into a single product. It solves a very specific problem within product collaboration, helping two separate professional disciplines speak the same language.
Are you new to The Content Technologist? You can subscribe, and it'll be free forever if you're one of the first 1,000 email addresses on my list.
Content tech links of the week
Ironically, I was reading this piece in Wired about the ineffectiveness of digital ads when the entire screen shifted and I completely lost my place, three paragraphs in. An ad for a Rolex or similar luxury good had loaded, merging the worst of ad tech and magazine snobbery to completely disturb my reading experience while pouring salt on my non-affluent wound.
Media 2070 is a deeply researched essay about historical racism in media, from newspapers' profits from the slave trade to the 20th-century policies that solidified media ownership in the hands of white men to current coverage of the Movement for Black Lives. It's worth a read no matter what field you work in.
On the racist history of certain typefaces, from Ceros Originals.
The Berkman Klein Center study is a must-read for understanding how legacy mass media still has far greater effects on our democracy than any social network.
Margaret Sullivan’s commentary on the Berkman Klein study in The Bezos Post is also worth a read.
This week the House Antitrust Subcommittee, chaired by Rep. David Cicilline (D-RI), released a report on big tech platforms, and Matt Stoller's roundup of its findings is fantastic. I can't wait to dive into the 400-page Cicilline Report, especially the sections on big Google.
Should that link automatically open a new tab? Usually no! No it shouldn’t! But in some circumstances, maybe. An update of old research from the Nielsen Norman Group.