20 Comments
Comment deleted · Jul 30

Timothy B. Lee

Hi Anna, Substack tells me they are going to reach out to you directly. I hope they're able to address your issue. Good luck!

Rob Nelson

Limited customer service assistance seems like a good fit for where LLMs are now, but as you point out, "limited" is the right way to use them. Right in this case means augmentation of human effort by answering the easy questions automatically and evaluating the complex questions in a way that sets up a human to answer quickly. More transparency so that customers know who they are writing/talking to is an approach that would make me more enthusiastic.

To your point, companies seldom develop Key Performance Indicators (KPIs) around customer frustration. Too hard to measure. But for those who care, simply letting your customers know who or what they are dealing with will reduce frustration. Giving me some imperfect but automated help AND a clear way to access a human who can help will go a long way toward separating me from my money.

Chris Guest

I have been very impressed by the Substack Decagon chatbot and use it regularly. It recently helped me unpick the confusing state of follower numbers I was seeing across my publications.

What I find interesting is that as a voracious consumer of AI product release news across Product Hunt and several daily product newsletters, I see variations of “a chat bot trained on your own data” as one of the most common product propositions. So it must be a very crowded market.

And yet, Substack's is the only good implementation I've ever seen in the wild. Every other support chatbot I've used as a consumer is complete garbage. Most are still just dumb logic-based implementations that try to route you to an unhelpful FAQ page ASAP.

This has to be one of the best use cases for practical Gen AI that will survive the coming backlash/winter. As a user I hope it does!

Joanna Piros

Thank you for sharing the back story of Substack's chatbot. It has impressed me since the outset, not only for the quality of the information it shares, but also for the friendly conversational interface. If I say Good morning and make small talk, it responds as a friend would. Scary and impressive. I have used so many helpbots whose normal answers are irrelevant and/or wrong, and simply a hoop to jump through before typing AGENT in all caps until a real human takes over. With Substack's chatbot, which really deserves a catchy name, I've never had to resort to all caps!

Daniel Nest

I think the chatbot is generally quite helpful for generic queries but I've experienced it hallucinating Dashboards and settings that weren't there when I asked about niche, specific things (as LLMs are prone to do).

But overall a pretty smooth implementation for sure!

mcsvbff bebh

I work on an internet service that interfaces with Substack. I tried to send them a bug report about this service, but they seem to have no public-facing support beyond the chatbot. After going around in circles with the chatbot trying to report this simple issue, I asked if they could have a person email me. The chatbot said yes, they would email <address> right away. The problem is that I didn't provide an email address, so it just made one up. I tried multiple times to correct it and point it to the right address, but multiple weeks later I haven't received an email. Extremely frustrating and actively hostile.
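
For illustration, here's a minimal sketch of the kind of guard that would avoid this failure (the names escalate_to_human and account_email are made up; this is not Substack's or Decagon's actual support code): an escalation should only use an address already verified on the account, never one the model produced on its own.

```python
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def escalate_to_human(account_email: str | None, model_supplied_email: str | None) -> str:
    """Decide where the follow-up email goes when a chat is escalated."""
    # Trust only the address already verified on the account.
    if account_email:
        return f"Escalated. A person will reply to {account_email}."
    # A model-invented address is never used silently; ask the user to confirm instead.
    if model_supplied_email and EMAIL_RE.match(model_supplied_email):
        return f"Please confirm that {model_supplied_email} is correct before we escalate."
    return "We don't have an email address on file; please provide one so a person can reply."

# With no address on file and a fabricated one from the model, nothing is sent blindly.
print(escalate_to_human(account_email=None, model_supplied_email="made.up@example.com"))
```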

We're soon going to reach a point where having an actual person answer support requests will be a differentiator for businesses.

I have no idea where this quick and easy human is but tell them I'm awaiting an email.

Timothy B. Lee

I'm sorry to hear that! If you email me ( tim -at- understanding ai -dot- org ) with identifying details I'll be happy to make sure someone at Substack sees it.

mcsvbff bebh

How? By backchanneling to an actual person you deal with? I will probably do that if I get around to it, but the issue isn't that important. My point is that this flies in the face of your article, which in my opinion is just way off base. I'm sorry, but a chatbot that will make up email addresses when asked to email you is not ready for production, let alone worthy of a glowing review.

Salvador Lorca 📚 ⭕️

Can I write to you with my Substack problem?

Timothy B. Lee

Feel free but I’m on vacation so it might be a little while before I can get to it.

Salvador Lorca 📚 ⭕️

Thanks a lot.

I wrote this in the comments: Let's see, I think it (the AI) is good, and that, when faced with basic questions, 80% of the time it can give good answers IF IT IS GIVEN ALL THE INFORMATION, NOT JUST OLD INFORMATION.

The problem is that, when it can't help, it says it is passing the issue to human support, and that's where the doubt comes in: does it not actually pass the problem along and just keep it, or does it pass it along and human support doesn't have time to answer, or do they simply not help because they have other priorities?

In the end, the business model depends on this. Sometimes it's better to hire people than to spend a lot of money on an AI that you don't feed the latest data and that makes things up.

I've published some pieces about the answers it gives; if you want, I'll send them to you.

Salvador Lorca 📚 ⭕️

The same problem

Isaac King

> It also provided Decagon’s software with access to Substack’s subscriber database so the chatbot can look up user-specific information or take actions like canceling subscriptions and offering refunds.

I hope the chatbot is only given access to *that user's* information and subscription? Otherwise this presents a major theft and vandalism risk as soon as someone figures out how to jailbreak the chatbot to send them the email address or cancel the subscription of every Substack user.
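
For illustration, here's a minimal sketch of what that per-user scoping could look like. Everything in it is hypothetical (Session, SUBSCRIBERS, and cancel_subscription are made-up names, not Decagon's or Substack's actual API); the point is just that the tool acts on the authenticated session's account and ignores whichever account the model names.

```python
from dataclasses import dataclass

@dataclass
class Session:
    user_id: str  # set by the platform's auth layer, never by the model

# Stand-in for a subscriber database (illustrative only).
SUBSCRIBERS = {
    "u_123": {"email": "reader@example.com", "subscription": "active"},
}

def cancel_subscription(session: Session, model_args: dict) -> str:
    """Tool handler: cancel only the signed-in user's subscription.

    Any user ID or email the model puts in `model_args` is ignored, so a
    jailbroken prompt asking to cancel "every subscription" can only ever
    reach the session owner's own record.
    """
    record = SUBSCRIBERS.get(session.user_id)
    if record is None:
        return "No subscription found for this account."
    record["subscription"] = "canceled"
    return f"Canceled the subscription for {record['email']}."

print(cancel_subscription(Session(user_id="u_123"), {"target": "all users"}))
```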

Michael Spencer

Have you actually tried to use it when you have a real issue? It's cost management with a customer experience trade-off. Not a great sign.

Riccardo Vocca

There is an interesting paper suggesting that bad news should be delivered by a chatbot and good news by a human, at least in customer relations. The analysis you've done here on the motivations, implications, and possible imitators of Substack's approach and choices on the chatbot is very interesting. I believe that, as you specify, the point is to recognize its limits and not become constantly dependent on it: for a brand and platform like Substack, with great affection from its community, the chatbot represents a real effort toward writers and people who have problems. Feeling that these problems are actually being solved only increases the bond with the platform and the commitment to it.

Jojo

"Of course, there’s a risk it won’t be implemented well. We’ve all had the experience of calling a company on the phone and reaching an automated system instead of a human being. It’s usually possible to get a person on the line, but companies don’t necessarily make it easy."

------

Ha ha ha. Comcast! When I used to be their customer, I found that raising my voice and yelling "AGENT" repeatedly usually worked to get a human. Cursing loudly worked also. I wonder if these bots are programmed to react to loud and incensed yelling using curse words?

Jojo

I wouldn't mind using an intelligent bot, but few of them are. I wonder if Substack can explain why I never stay logged in and ALWAYS get prompted to enter my password. Using an up-to-date version of Chrome.

Salvador Lorca 📚 ⭕️

Quick? Well, I'm still waiting weeks for human support.

Meng Li

The Substack chatbot is still very useful, essentially acting as a personal assistant. This is also a trend; in the future, platform customer service will be handled by AI, reducing labor costs.

Markus Mars

The chatbot is the worst part of Substack because it doesn't help with problem-solving. If I run into an issue, I look for it on Substack's support pages. The search function is helpful, and this would also be a great place for the chatbot to act as a search engine. However, if I can't find a solution, I want to be able to reach out to a Substack expert who is capable of helping me solve a problem that lies beyond the average "Did you restart your computer yet?" approach.

The main issue with chatbots like Substack's is that they contribute to people's inability to solve complex problems within a reasonable time frame. Instead, they create a culture of dumb question-asking before reading resources, which, in turn, increases "support" requests (as stated in your article) and makes it harder to get help from an actual human being with a problem that the chatbot can't solve and that lies beyond Substack's FAQ pages.

In conclusion, I am not a fan of this chatbot. It repeats itself, contradicts itself, and bluntly lies about escalating cases. I've run into an issue I've been trying to resolve for two months, and I've not heard back from anyone at Substack. I reported the issue week after week, and the chatbot told me the same old lie about escalating the case week after week. It even told me to send an email to a particular address, only for that email to bounce with an error saying the address didn't exist.
