Great Customer Service Smooths Out Bad Self-Service

Switching from the janky status quo, where Disney+ and Hulu were billed separately and Hulu had ads but Disney+ didn’t, to a truly bundled experience (both ad-free) required the great customer service experience I had earlier today. In prior months, I’d made the mistake of following the instructions provided as the self-service route and failed miserably. I switched from annual to monthly billing on Disney+ and tried to sign up for the Premium Duo multiple times over multiple months, only to be redirected to Hulu and blocked from getting what I wanted.

Today I tried the chat option (with a live human being) and finally got the bundle I wanted, plus a refund for the price difference between the new bundle and what I’d been paying. It ultimately took being manually unsubscribed from both Disney+ and Hulu, which the customer service rep accomplished by reaching out to whatever departments and systems she needed to, all in the span of about 20 minutes. Definitely a 5-star customer service experience, unfortunately made necessary by terrible self-service options.

Plenty of companies almost certainly believe they will be able to use ChatGPT (or something like it) to replace the people who do this work. But at least initially (and probably for quite a while after that), the fully automated customer service experience is likely to be worse, if not much worse, than customer service from people. I’m very skeptical that an AI chatbot would have driven the same outcome in this case that a person did. And this is a low-stakes situation like streaming services (some number of which will very likely end up on my budget chopping block in 2024). High-stakes customer service situations will not have the same tolerance for mistakes, as shown by the FTC’s 5-year ban on Rite-Aid using facial recognition for surveillance. These are the sorts of mistakes the documentary Coded Bias warned about years ago, but I have no doubt that other companies will make the same mistakes Rite-Aid did.

In an episode of Hanselminutes I listened to recently, the host (Scott Hanselman) compared two ways AI could be used: as the Iron Man suit, or as Ultron. I hope using AI to augment human capabilities (like the Iron Man suit) is the destination we get back to, after the current pursuit of replacing humans entirely (like Ultron) fails. Customer service experiences that are led by people but augmented by technology will be better for people on both sides of the customer service equation, and better for brands.

Will AI Change My Job or Replace It?

One of my Twitter mutuals recently shared the following tweet with me regarding AI:


I found Dare Obasanjo’s commentary especially interesting because my connection to Stack Overflow runs a bit deeper than it might for some developers. As I mentioned in a much older post, I was a beta tester for the original stackoverflow.com; every beta tester contributed some of the original questions still on the site today. While the careers site Stack Overflow went on to create was sunsetted as a feature last year, it helped me find a role in healthcare IT where I spent a few years of my career before returning to the management ranks. Why is this relevant to AI? Because the purpose of Stack Overflow was (and is) to give software developers a place to ask questions of other software developers and get answers that help them solve programming problems. Obasanjo’s takeaway from the CEO’s letter is that this decade-plus-old collection of questions and answers about software development challenges will be used as input for an AI that can replace software engineers altogether. My main takeaway from the same letter is that at some point this summer (possibly later), Stack Overflow and Stack Overflow for Teams (their corporate product) will get some sort of conversational AI capability, perhaps even without the “hallucination problems” that have made the news recently.

Part of the reason I’m more inclined to believe that [chatbot] + [10+ years of programming Q & A site data] = [better programming Q & A resource] or [better starter app scaffolder], instead of [replacement for junior engineers], is knowing just how long we’ve been trying to replace people with software development expertise with tools that promise to let people without that expertise create software. While enough engineers have copied and pasted code from Stack Overflow into their own projects that it inspired an April Fools’ gag product (which later became a real product), I believe we’re probably still quite some distance away from text prompts generating working Java APIs. I’ve lost track of how many companies have come and gone after putting products into the market that promised to let businesses replace software developers: tools that let you draw what you want and generate working software, or drag and drop boxes and arrows you can connect together to yield working software, or some other variation on the theme of [idea] + [magic tool] = [working software product], with no testing, validation, or software developers in between. The truth is that there’s much more mileage to be gained from tools that help software developers do their jobs better and more quickly.

ReSharper is a tool I used for many years when I was writing production C# code, and it went a long way toward reducing (if not eliminating) a lot of the drudgery of software development. Generating boilerplate code, renaming variables, and renaming classes are just a few of the boring (and time-consuming) things it accelerated immensely. And that’s before you get to the numerous quick fixes it suggested to improve your code, or its static code analysis to find and warn you of potential problems. I haven’t used GitHub Copilot (Microsoft’s so-called “AI pair programmer”) myself, in part because I’m in management and don’t write production code anymore, and in part because there are probably unpleasant legal ramifications to giving such a tool access to code owned by an employer, but it sounds very much like ReSharper on steroids.

Anthony B (on Twitter and Substack) has a far more profane, hilarious (and accurate) take on what ChatGPT, Bard, and other systems some (very) generously call conversational AI actually are:

His Substack piece goes into more detail, and as amusing as the term “spicy autocomplete” is, his comparison of how large language model systems handle uncertainty to how spam detection systems handle it provides real insight into the limitations of these systems in their current state. Another aspect of the challenge he touches on briefly in the piece is training data. In the case of Stack Overflow in particular, having asked and answered dozens of questions that will presumably be part of the training data set for their chatbot, I can attest that the quality of both questions and answers varies widely. The upvotes and downvotes for each are decent quality signals, but they are not authoritative. A Stack Overflow chatbot could conceivably respond with an answer based on something with a lot of upvotes that is actually incorrect.
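To make that failure mode concrete, here is a minimal sketch (all names and data are hypothetical, not from any real Stack Overflow system) of a retrieval strategy that trusts vote counts: it simply returns the highest-scored answer, which popularity alone does not guarantee is correct.

```python
# Hypothetical sketch: ranking candidate answers purely by community upvotes.
# Vote counts are a popularity signal, not a correctness guarantee, so the
# top-ranked answer can still be outdated or simply wrong.

def pick_answer(answers):
    """Return the candidate answer with the highest vote score."""
    return max(answers, key=lambda a: a["score"])

# Toy data: the highest-scored answer is a stale, incorrect one.
answers = [
    {"text": "Use the old API (accepted in 2013)", "score": 412, "correct": False},
    {"text": "Use the currently supported API", "score": 87, "correct": True},
]

best = pick_answer(answers)
print(best["text"])  # the popular answer wins, despite being incorrect
```

A system that answers from this kind of data would need correctness signals beyond raw votes (recency, accepted status, human review) to avoid confidently repeating the popular-but-wrong answer.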

There’s an entirely different discussion to be had (and litigation in progress against an AI image-generation startup, as well as a lawsuit against Microsoft, GitHub, and OpenAI) regarding the training of large language models on copyrighted material without paying copyright holders. How those lawsuits turn out (via judgments or settlements) should answer at least some questions about what would-be chatbot creators can use for training data (and how lucrative it might be for copyright holders to license some of their material for this purpose). But in the meantime, I do not expect my job to be replaced by AI anytime soon.