Dukaan founder and CEO Suumit Shah revealed that 90% of the company’s support staff has been laid off after the introduction of an AI chatbot to answer customer support queries.
The big problem with wanting to use AI (which these days generally means an LLM) is that it lacks real creativity. If a problem isn’t documented, the AI won’t know what to do with a particularly difficult support request, or it will give wrong answers altogether. My time in customer support for tech taught me that the number of novel resolutions is far, far greater than most people realize.
I would agree; this was my first thought.
Though if the product is sufficiently well-defined and bounded, it might make sense. Think of a support line for a fridge, an oven, or other less open-ended products. Unbounded spaces like general-purpose computer support will initially struggle while documentation is built up.
Wouldn’t any automated system ideally escalate to the next tier of (human) support when it detects something complicated?
Though I agree with you, I don’t think LLMs are “lay off 90% of the staff” good.
Why escalate when you can hallucinate!
In my experience, this never happens. Since they now have very few human staff, they make it VERY difficult to talk to a human, to the point where you often give up.
Oftentimes it will get actual documented solutions wrong too; the same type of concept implemented in the MDN is an example.
First-level support staff generally aren’t allowed to have creativity. They just follow the script and pass the problem up when it’s something they can’t handle.
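The script-then-escalate pattern these comments describe can be put into a minimal sketch: answer only when a documented solution exists and confidence is high, otherwise hand off to a human tier. Everything here (the knowledge base, the threshold, the function names) is hypothetical, not any real vendor’s implementation:

```python
# Sketch of a confidence-gated support bot: answer from documented
# solutions when confident, otherwise escalate to a human tier.
# The knowledge base entries and threshold are made-up examples.

KNOWN_SOLUTIONS = {
    "fridge not cooling": ("Check the condenser coils for dust.", 0.9),
    "oven light out": ("Replace the bulb behind the glass cover.", 0.95),
}

ESCALATION_THRESHOLD = 0.8  # below this, a human takes over

def handle_ticket(query: str) -> str:
    match = KNOWN_SOLUTIONS.get(query.lower())
    if match is None:
        # Undocumented problem: the bot has nothing to go on,
        # so escalate instead of hallucinating an answer.
        return "ESCALATE: no documented solution"
    answer, confidence = match
    if confidence < ESCALATION_THRESHOLD:
        return "ESCALATE: low confidence"
    return answer
```

The whole debate above is about the `match is None` branch: a first-level human follows the same script, but a company that fires the humans has nowhere left to escalate to.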