Buy-now-pay-later giant. Replaced 700 agents with AI, then reversed course.
Klarna became the poster child for AI-driven workforce reduction when it announced that its OpenAI chatbot had handled 2.3M conversations in its first month, resolving 66% of all customer service chats. The company cut its workforce from 5,500 to 3,400, with CEO Sebastian Siemiatkowski proudly declaring the AI could do the work of 700 agents. But the story took a turn in 2025: customer satisfaction scores dropped, error rates climbed, and Klarna quietly began rehiring human agents. The reversal revealed an uncomfortable truth — AI customer service works for simple queries but fails on nuanced, emotional, or complex financial disputes. The stock fell 35% below IPO expectations as the AI-first narrative collapsed.
Klarna deployed an OpenAI-powered chatbot that handled 2.3 million customer service conversations in its first month — the equivalent of 700 full-time agents. But after quality complaints mounted, the company reversed course and began rehiring humans in 2025.
Peak: 5,500 employees, $45.6B valuation, dominant BNPL player
Valuation crashes to $6.7B in down round, layoffs begin
OpenAI chatbot launches, handles 2.3M chats in first month
Workforce cut from 5,500 to 3,400, CEO touts AI efficiency
Customer satisfaction drops; Klarna begins rehiring human agents
Stock -35% from IPO expectations, AI-first narrative questioned
Deploy AI chatbots for Tier 1 customer support (FAQs, order status, simple disputes) while keeping humans for complex financial cases. Klarna's story shows that the hybrid model wins — full AI replacement degrades quality, but AI handling 60-70% of volume still cuts costs dramatically.
Audit your support tickets: categorize by type, complexity, and current resolution time
Build an AI agent trained on your FAQ, product docs, and top 100 resolved tickets
Set strict guardrails: the AI should never guess on financial amounts, refund policies, or account details
Deploy on chat with a clear 'Talk to a human' escape hatch — never trap users in AI loops
Route complex cases (disputes, complaints, account issues) directly to human agents
Monitor CSAT scores weekly — if satisfaction drops below threshold, expand human routing rules
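The routing and monitoring steps above can be sketched in a few lines of Python. The category names, escalation phrases, and the CSAT threshold below are illustrative assumptions for the sketch, not Klarna's actual rules:

```python
# Sketch of steps 5-6: route tickets to AI or humans, and watch CSAT.
# Category names, phrases, and the 4.0 threshold are assumed values.

AI_CATEGORIES = {"faq", "order_status", "simple_dispute"}
HUMAN_CATEGORIES = {"complex_dispute", "complaint", "account_issue"}
ESCALATION_PHRASES = ("talk to a human", "this is ridiculous", "hardship")

def route(ticket_category: str, message: str) -> str:
    """Return 'ai' or 'human' for an incoming ticket."""
    text = message.lower()
    if ticket_category in HUMAN_CATEGORIES:
        return "human"
    if any(phrase in text for phrase in ESCALATION_PHRASES):
        return "human"  # honor the 'Talk to a human' escape hatch
    if ticket_category in AI_CATEGORIES:
        return "ai"
    return "human"  # default unclassified tickets to humans

def should_expand_human_routing(weekly_csat: list[float], threshold: float = 4.0) -> bool:
    """Step 6: flag when average weekly CSAT dips below the threshold."""
    return sum(weekly_csat) / len(weekly_csat) < threshold
```

Defaulting unclassified tickets to humans is deliberate: the failure mode to avoid is the AI guessing on cases it was never trained for.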
You are a customer support agent for a buy-now-pay-later service. Use the knowledge base below to answer questions.

Knowledge base: {{knowledge_base}}

Rules:
- Be empathetic and concise
- For payment schedule questions, always show exact dates and amounts
- NEVER guess refund amounts or policy exceptions — escalate to human
- If the customer mentions financial hardship, immediately offer to connect with a specialist
- Confirm understanding before taking any account actions

Customer: {{message}}
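Before the template above can be sent to a model, its `{{knowledge_base}}` and `{{message}}` placeholders need to be filled in. A minimal, hypothetical rendering helper (the abbreviated template string here is for illustration only):

```python
# Hypothetical placeholder substitution for the support prompt template.
# SUPPORT_PROMPT is an abbreviated stand-in for the full template above.

SUPPORT_PROMPT = (
    "You are a customer support agent for a buy-now-pay-later service.\n"
    "Knowledge base: {{knowledge_base}}\n"
    "Customer: {{message}}"
)

def render_prompt(template: str, **fields: str) -> str:
    """Substitute each {{name}} placeholder with the matching field value."""
    for name, value in fields.items():
        template = template.replace("{{" + name + "}}", value)
    return template

prompt = render_prompt(
    SUPPORT_PROMPT,
    knowledge_base="Payments are split into 4 installments.",
    message="When is my next payment due?",
)
```

In production you would validate that no `{{...}}` placeholder survives rendering, so a malformed prompt never reaches the customer.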
Review these {{count}} AI-handled customer conversations and evaluate each on:
1. Accuracy — Did the AI provide correct information?
2. Completeness — Was the customer's issue fully resolved?
3. Tone — Was the response empathetic and professional?
4. Escalation judgment — Should it have escalated to a human? Did it escalate unnecessarily?
5. Policy compliance — Did it follow all stated guidelines?

Flag any conversation where the AI made an error or the customer expressed dissatisfaction.

Conversations: {{conversations}}
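Once the review prompt returns per-conversation verdicts, you still need to roll them up into something actionable. A small aggregation sketch, assuming a `{'flagged': bool, 'reason': str | None}` shape for each review (that shape is an assumption, not part of the prompt above):

```python
# Aggregate review verdicts into a flag rate and the top failure reasons.
# The input shape [{'flagged': bool, 'reason': str | None}, ...] is assumed.
from collections import Counter

def summarize_reviews(reviews: list[dict]) -> dict:
    """Return total count, flag rate, and the three most common flag reasons."""
    flagged = [r for r in reviews if r["flagged"]]
    return {
        "total": len(reviews),
        "flag_rate": len(flagged) / len(reviews) if reviews else 0.0,
        "top_reasons": Counter(r["reason"] for r in flagged).most_common(3),
    }
```

A rising flag rate week over week is the signal (per step 6 of the playbook) to expand human routing rules.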
Analyze these {{count}} resolved support tickets and create a structured knowledge base:
1. Group into categories (billing, returns, account, technical, etc.)
2. For each category, write the ideal AI response template
3. Identify questions that MUST go to humans (list specific triggers)
4. Create a decision tree for multi-step issues (e.g., missed payment → check status → offer plan)
5. List exact phrases that indicate customer frustration (for auto-escalation)

Tickets: {{tickets}}
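The decision tree from step 4 can be stored as plain data so the bot walks it deterministically instead of improvising. An illustrative encoding of the missed-payment example (node names and messages are assumptions for the sketch, not a real policy):

```python
# Data-driven decision tree for the step-4 example:
# missed payment -> check status -> offer plan. Contents are illustrative.

DECISION_TREE = {
    "missed_payment": {
        "question": "Is the account currently past due?",
        "yes": "offer_payment_plan",
        "no": "confirm_schedule",
    },
    "offer_payment_plan": {
        "action": "Offer to split the missed amount over the next two installments."
    },
    "confirm_schedule": {
        "action": "Confirm the upcoming payment dates with the customer."
    },
}

def walk(tree: dict, node: str, answers: list[str]) -> str:
    """Follow yes/no answers from a start node until a leaf action is reached."""
    answers = list(answers)  # copy so the caller's list is untouched
    while "action" not in tree[node]:
        node = tree[node][answers.pop(0)]
    return tree[node]["action"]
```

Keeping the tree as data means policy changes are config edits reviewable by the support team, not model retraining.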
Continuously feed resolved tickets back into the AI knowledge base to improve accuracy
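This feedback loop can be as simple as folding each week's resolved tickets into the knowledge base while skipping questions already covered. A minimal sketch, assuming a `{'question': ..., 'resolution': ...}` ticket shape (that shape is an assumption for illustration):

```python
# Fold newly resolved tickets into the knowledge base, keeping the first
# answer seen for each question. Ticket shape is assumed for the sketch.

def update_knowledge_base(kb: dict[str, str], resolved: list[dict]) -> dict[str, str]:
    """Add each resolved ticket's question/answer pair if the question is new."""
    for ticket in resolved:
        question = ticket["question"].strip().lower()
        if question not in kb:
            kb[question] = ticket["resolution"]
    return kb
```

In practice you would also re-run the quality-review prompt over entries before they ship, so a bad human resolution does not become the AI's canonical answer.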