Customer Feedback Analysis
Build a feedback analysis system with sentiment analysis, automatic categorisation, trend detection, and actionable insight dashboards.
The feedback analysis problem
Customer feedback comes from everywhere: support tickets, app store reviews, NPS survey responses, social media mentions, sales call notes, and feature request forms. Most businesses collect this feedback but struggle to use it. A support manager might read 50 tickets a day but cannot see that 30 percent mention the same billing issue. A product manager reads feature requests but cannot quantify which features are most in demand. Feedback analysis automates the reading, categorisation, and quantification of customer feedback so decision-makers get actionable summaries instead of raw text.

Ask Claude Code: Create a Next.js project with TypeScript, Tailwind, and Prisma for a customer feedback analysis platform. Define the database schema. FeedbackItem (id, source as support_ticket or review or survey or social or sales_note, sourceId for deduplication, text as the feedback content, customerId optional, customerEmail optional, createdAt, processedAt optional, sentiment as positive or negative or neutral or mixed, sentimentScore as a float from -1.0 to 1.0, categories as JSON array, urgency as low or medium or high or critical, actionItems as JSON array of extracted action items, metadata as JSON for source-specific data). Category (id, name, description, parentId optional for hierarchy, feedbackCount). Trend (id, categoryId, period as daily or weekly or monthly, startDate, feedbackCount, averageSentiment, delta as change from previous period). Seed with 200 sample feedback items across all sources.

Ask Claude Code: Generate realistic customer feedback covering common themes: billing complaints, feature requests, praise for the support team, confusion about pricing, bugs in the mobile app, and onboarding difficulties. Include a mix of sentiments and urgency levels.
Sentiment analysis engine
Sentiment analysis determines whether feedback is positive, negative, neutral, or mixed. The simplest approach uses keyword matching with weighted scores.

Ask Claude Code: Create a sentiment analyser at src/lib/sentiment.ts. Build a lexicon-based analyser with three components. A sentiment lexicon: a dictionary mapping words to sentiment scores. Positive words like excellent (+3), good (+2), helpful (+1), easy (+1). Negative words like terrible (-3), broken (-2), frustrating (-2), confusing (-1). Intensifiers like very (multiply the next word's score by 1.5), extremely (multiply by 2.0), and slightly (multiply by 0.5). Negators like not, never, no (flip the next word's sign). The analyser splits text into words, looks up each word's sentiment score, applies modifier logic (intensifiers and negators), sums the scores, and normalises to a -1.0 to 1.0 range. Classify the overall sentiment: above 0.2 is positive, below -0.2 is negative, between -0.2 and 0.2 is neutral, and text with both strong positive and strong negative signals is mixed.

Ask Claude Code: Enhance the analyser with phrase-level detection. Some phrases have sentiment that the individual words do not capture: not bad is mildly positive (not just a negation of bad), could be better is negative (even though better is positive), and used to love is negative (implying they no longer love it). Add a phrase dictionary that overrides word-level analysis when these patterns are detected.

Test the analyser against the sample data. Ask Claude Code: Run sentiment analysis on all 200 sample feedback items. Print a summary: how many positive, negative, neutral, and mixed. Show 5 examples where the sentiment seems wrong and adjust the lexicon. The goal is not perfection — 80 percent accuracy from a keyword-based system is good. For the remaining 20 percent, the categorisation and human review process catches the errors.

Common error: sarcasm defeats keyword-based sentiment analysis. The phrase Great, another update that breaks everything scores positive on keywords but is clearly negative. Flag feedback with conflicting signals (positive words in a negative context) for human review.
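The lexicon, intensifier, negator, and phrase-override logic described above can be sketched in a few dozen lines. This is a minimal illustration under stated assumptions, not the guide's full analyser: the word lists are tiny samples, `analyseSentiment` is a hypothetical name, and normalising by the maximum lexicon weight per scoring word is one possible choice.

```typescript
// A minimal lexicon-based sentiment analyser (illustrative word lists,
// hypothetical function name).
type Sentiment = "positive" | "negative" | "neutral" | "mixed";

const LEXICON: Record<string, number> = {
  excellent: 3, love: 3, good: 2, helpful: 1, easy: 1,
  terrible: -3, broken: -2, frustrating: -2, confusing: -1,
};
const INTENSIFIERS: Record<string, number> = { very: 1.5, extremely: 2.0, slightly: 0.5 };
const NEGATORS = new Set(["not", "never", "no"]);
// Phrase overrides take precedence over word-level scoring.
const PHRASES: Record<string, number> = { "not bad": 1, "could be better": -2, "used to love": -2 };

function analyseSentiment(text: string): { score: number; label: Sentiment } {
  let t = text.toLowerCase();
  let total = 0, posHits = 0, negHits = 0;

  // Phrase pass: score matched phrases, then remove them from the text
  // so their words are not scored twice.
  for (const [phrase, s] of Object.entries(PHRASES)) {
    if (t.includes(phrase)) {
      total += s;
      if (s > 0) posHits++; else negHits++;
      t = t.split(phrase).join(" ");
    }
  }

  // Word pass with intensifier and negator handling.
  let multiplier = 1;
  let negate = false;
  for (const w of t.split(/[^a-z']+/).filter(Boolean)) {
    if (w in INTENSIFIERS) { multiplier *= INTENSIFIERS[w]; continue; }
    if (NEGATORS.has(w)) { negate = !negate; continue; }
    let s = LEXICON[w] ?? 0;
    if (s !== 0) {
      s *= multiplier;
      if (negate) s = -s;
      total += s;
      if (s > 0) posHits++; else negHits++;
    }
    multiplier = 1; // modifiers only affect the next scoring word
    negate = false;
  }

  // Normalise by the maximum lexicon weight per scoring word, clamp to [-1, 1].
  const hits = posHits + negHits;
  const score = hits === 0 ? 0 : Math.max(-1, Math.min(1, total / (hits * 3)));
  // Strong signals on both sides that roughly cancel out: call it mixed.
  if (posHits > 0 && negHits > 0 && Math.abs(score) < 0.2) return { score, label: "mixed" };
  const label: Sentiment = score > 0.2 ? "positive" : score < -0.2 ? "negative" : "neutral";
  return { score, label };
}
```

Note how the phrase pass runs first: not bad scores +1 as a phrase rather than being mangled by the negator logic.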
Automatic categorisation and tagging
Categorisation answers the question: what is this feedback about? Is it about pricing, the mobile app, customer support, a specific feature, or onboarding?

Ask Claude Code: Create a categorisation engine at src/lib/categoriser.ts. Define a category hierarchy: Product (subcategories: Performance, Bugs, UI/UX, Features), Support (Response Time, Quality, Channels), Pricing (Confusion, Value, Billing Issues), Onboarding (Documentation, Tutorial, First Experience), and Competition (Comparisons, Switching, Missing Features). Each category has trigger keywords and phrases.

Ask Claude Code: Build the keyword-based categoriser. Product > Bugs triggers on: bug, broken, crash, error, does not work, not working, stopped working, glitch. Product > Features triggers on: wish, would be great, feature request, please add, can you add, need, missing. Support > Response Time triggers on: waiting, hours, days, no response, slow response, still waiting. Pricing > Confusion triggers on: pricing, cost, how much, expensive, price page, billing, charged, overcharged. Allow multiple categories per feedback item — a single message might mention both a bug and a pricing complaint. Rank categories by relevance (number and strength of keyword matches).

Add urgency detection. Ask Claude Code: Create an urgency classifier that scores feedback based on emotional intensity and business impact. Critical urgency: words like urgent, immediately, legal, cancel, lawsuit, data loss, security. High urgency: words like frustrated, unacceptable, switching, competitor, broken, angry. Medium urgency: words like disappointed, issue, problem, concerned, difficult. Low urgency: general suggestions, mild complaints, neutral observations. The urgency score combines word intensity with sentiment negativity — strongly negative feedback with urgency keywords is critical.

Extract action items. Ask Claude Code: Scan feedback for explicit requests and implicit needs. Explicit: Please add dark mode, Can you fix the export button, I need to be able to filter by date. Implicit: I spent 20 minutes trying to find the settings page (implicit need: improve settings discoverability). Extract these as structured action items with a description and the originating feedback ID.

Common error: category keywords overlap. The word slow could refer to app performance (Product > Performance) or support response time (Support > Response Time). Use context windows — check the surrounding words to disambiguate. Slow app, slow loading, slow page suggest performance. Slow response, slow reply, waiting suggest support.
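The keyword categoriser and the context-window disambiguation can be sketched together. The rule tables below are small samples of the triggers listed above, `categorise` is a hypothetical name, and a window of 3 words either side of the ambiguous keyword is an assumed choice.

```typescript
// Keyword-based multi-label categorisation with a context window to
// disambiguate overlapping keywords (illustrative rule tables).
type CategoryRule = { category: string; keywords: string[] };

const RULES: CategoryRule[] = [
  { category: "Product > Bugs", keywords: ["bug", "broken", "crash", "glitch"] },
  { category: "Product > Features", keywords: ["feature request", "please add", "wish", "missing"] },
  { category: "Support > Response Time", keywords: ["no response", "still waiting"] },
  { category: "Pricing > Confusion", keywords: ["pricing", "billing", "overcharged", "expensive"] },
];

// "slow" alone is ambiguous: nearby words decide which category it supports.
const CONTEXT_RULES = [
  { keyword: "slow", context: ["app", "loading", "page"], category: "Product > Performance" },
  { keyword: "slow", context: ["response", "reply", "support"], category: "Support > Response Time" },
];

function categorise(text: string): string[] {
  const t = text.toLowerCase();
  const scores = new Map<string, number>();

  // Plain keyword matches: one point per matched keyword, several
  // categories may score on the same item.
  for (const rule of RULES) {
    const hits = rule.keywords.filter((k) => t.includes(k)).length;
    if (hits > 0) scores.set(rule.category, hits);
  }

  // Context window: check words within 3 positions of an ambiguous keyword.
  const words = t.split(/[^a-z]+/).filter(Boolean);
  words.forEach((w, i) => {
    for (const cr of CONTEXT_RULES) {
      const window = words.slice(Math.max(0, i - 3), i + 4);
      if (w === cr.keyword && cr.context.some((c) => window.includes(c))) {
        scores.set(cr.category, (scores.get(cr.category) ?? 0) + 1);
      }
    }
  });

  // Most strongly matched categories first.
  return [...scores.entries()].sort((a, b) => b[1] - a[1]).map(([category]) => category);
}
```

A message mentioning both a broken export and an overcharge lands in Product > Bugs and Pricing > Confusion at once, which is the desired multi-category behaviour.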
Trend detection and pattern analysis
Individual feedback items are anecdotes. Trends are data. A single complaint about slow loading times is noise. Fifty complaints in one week is a signal that something broke.

Ask Claude Code: Create a trend detection system at src/lib/trends.ts. Calculate metrics per category per time period (daily and weekly). For each category, track: feedback volume (how many items), average sentiment (is it getting better or worse), and velocity (is volume increasing or decreasing compared to the previous period).

Detect trend changes. Ask Claude Code: Implement change detection using simple statistical methods. For each category, calculate the rolling 7-day average of daily feedback volume. When today's volume exceeds the average by more than 2 standard deviations, flag it as a spike. When the 7-day average increases by more than 50 percent compared to the previous 7 days, flag it as an emerging trend. When a category's average sentiment drops by more than 0.3 points in a week, flag it as a sentiment shift.

Generate trend alerts. Ask Claude Code: When a trend change is detected, create an alert with: the category name, the type of change (spike, emerging trend, sentiment shift), the magnitude (feedback about Pricing increased 120 percent this week), sample feedback items that contributed to the trend (show 3 representative items), and a suggested action (Investigate recent pricing changes or Review the latest app update for performance regressions).

Build a correlation analyser. Ask Claude Code: Check for correlations between feedback trends and business events. If you log business events (product releases, pricing changes, marketing campaigns, outages) in a simple event table, the system can automatically suggest causes: Feedback about Performance Bugs spiked 300 percent — this correlates with the v2.4.0 release 2 days ago. This turns reactive feedback analysis into proactive product intelligence.

Add competitive intelligence from feedback. Ask Claude Code: Scan feedback for competitor mentions. When customers mention competitors by name (I am switching to Competitor X, Competitor Y has this feature), log the competitor, the context (what they are being compared on), and the sentiment. Build a competitive insights report showing which competitors are mentioned most, what features drive comparison, and whether competitor mentions are increasing or decreasing.

Common error: trends in small volumes are unreliable. If you normally get 3 items per day in a category, a jump to 6 is a 100 percent increase but not statistically meaningful. Set minimum thresholds for trend detection — only flag changes when the base volume exceeds 10 items per period.
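The spike rule and the minimum-volume guard above combine into one small function. A sketch under stated assumptions: `isSpike` is a hypothetical name, the input is a chronological array of daily counts ending with today, and the population standard deviation is used.

```typescript
// Spike detection over daily volumes: flag today when it exceeds the
// trailing 7-day mean by more than 2 standard deviations, and skip
// low-volume days where trends are statistical noise.
function isSpike(dailyCounts: number[], minVolume = 10): boolean {
  if (dailyCounts.length < 8) return false; // need 7 prior days plus today
  const today = dailyCounts[dailyCounts.length - 1];
  if (today < minVolume) return false; // small volumes are unreliable

  const window = dailyCounts.slice(-8, -1); // the trailing 7 days
  const mean = window.reduce((a, b) => a + b, 0) / window.length;
  const variance = window.reduce((a, b) => a + (b - mean) ** 2, 0) / window.length;
  const stdDev = Math.sqrt(variance);

  return today > mean + 2 * stdDev;
}
```

The emerging-trend and sentiment-shift checks follow the same shape: compare an aggregate over the current window against the previous window and flag when the delta crosses the stated threshold.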
Feedback collection and integration
The analysis is only as good as the data coming in. Connect all your feedback sources into the pipeline.

Ask Claude Code: Build source connectors at src/connectors/. Create a support ticket connector that reads from a database table (simulate with a JSON file for development). Map the ticket fields to FeedbackItem: the ticket body becomes the text, the ticket priority maps to initial urgency, and the customer email links to the customer.

Create a review connector for app store reviews. Ask Claude Code: Build a connector that processes app store reviews. Accept a CSV export from App Store Connect or Google Play Console. Parse each review into a FeedbackItem with the review text, star rating (map to initial sentiment: 1-2 stars is negative, 3 is neutral, 4-5 is positive), review date, and the app version mentioned.

Create a survey connector. Ask Claude Code: Build a connector for NPS (Net Promoter Score) survey responses. Accept a CSV with columns for score (0-10) and comment. Map the NPS score to sentiment: promoters (9-10) are positive, passives (7-8) are neutral, and detractors (0-6) are negative. The comment text goes through the full analysis pipeline.

Add an email inbox connector. Ask Claude Code: Create a connector that processes feedback emails. Accept forwarded emails in a standard format: from address, subject, body text. Strip email signatures, quoted replies, and formatting to get the core feedback text. Categorise based on the email subject and body.

Build a manual entry form. Ask Claude Code: Create a feedback submission page at src/app/feedback/submit/page.tsx. Internal team members (sales, support, customer success) can paste feedback they have received and select the source. This captures verbal feedback from calls and meetings that would otherwise be lost.

Add deduplication. Ask Claude Code: Before processing a new feedback item, check for duplicates. Compare the text against recent items using a simple similarity measure (shared word percentage). If two items share more than 80 percent of their words and come from the same customer within 24 hours, flag it as a duplicate and link them. This prevents the same feedback submitted through multiple channels from inflating the counts.

Common error: feedback from different sources has different formats and quality. App store reviews are short and opinionated. Support tickets are detailed and specific. Survey responses are brief. NPS comments can be one word. Your analysis pipeline must handle all these formats — do not assume all feedback is multi-sentence prose.
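The shared-word similarity check can be sketched as follows. "Share more than 80 percent of their words" is interpreted here as a Jaccard index over the two sets of unique words, which is one reasonable reading; `wordSimilarity` and `isDuplicate` are hypothetical names.

```typescript
// Word-overlap similarity for deduplication: the fraction of unique words
// two texts share (Jaccard index), plus the same-customer / 24-hour rule.
function wordSimilarity(a: string, b: string): number {
  const tokenize = (s: string) => new Set(s.toLowerCase().split(/\W+/).filter(Boolean));
  const wa = tokenize(a);
  const wb = tokenize(b);
  if (wa.size === 0 && wb.size === 0) return 1;
  let shared = 0;
  for (const w of wa) if (wb.has(w)) shared++;
  const union = wa.size + wb.size - shared;
  return shared / union;
}

function isDuplicate(
  a: { text: string; customerId?: string; createdAt: Date },
  b: { text: string; customerId?: string; createdAt: Date },
): boolean {
  const sameCustomer = !!a.customerId && a.customerId === b.customerId;
  const within24h = Math.abs(a.createdAt.getTime() - b.createdAt.getTime()) <= 24 * 3600 * 1000;
  // Only flag when all three conditions hold, so distinct customers
  // raising the same issue still count separately.
  return sameCustomer && within24h && wordSimilarity(a.text, b.text) > 0.8;
}
```

Requiring the same customer is important: the point is to collapse one person's multi-channel submission, not to merge independent reports of the same problem.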
Dashboard, reporting, and deployment
The dashboard turns raw analysis into business decisions.

Ask Claude Code: Create the main feedback dashboard at src/app/dashboard/page.tsx. Show these components. Sentiment overview: a large gauge chart showing the overall sentiment score across all recent feedback (last 30 days), with a trend arrow showing whether sentiment is improving or declining. Below the gauge, show sentiment by source (support tickets average -0.3, reviews average +0.2, surveys average +0.1). Category breakdown: a treemap chart where each rectangle represents a category, sized by feedback volume and coloured by average sentiment (green for positive, red for negative). Clicking a category shows the subcategories and recent feedback items. Trend alerts: a prominent panel showing active trend alerts sorted by severity. Each alert shows the category, what changed, and the recommended action. Recent feedback stream: a scrollable feed of the latest 20 feedback items, each showing the source icon, a sentiment indicator, categories as tags, and the first 100 characters of the text. Click to expand and see the full item.

Build automated reports. Ask Claude Code: Create a weekly feedback report that is generated automatically every Monday at 8 AM. The report includes: this week's feedback volume compared to last week, the top 3 categories by volume (with representative quotes), a sentiment trend chart for the last 8 weeks, new emerging issues (categories that appeared this week but were not present in the previous 4 weeks), resolved issues (categories that were trending last month but have returned to baseline), and a recommended action list prioritised by impact (volume times sentiment severity).

Add a product team view. Ask Claude Code: Create a page focused on feature requests and bug reports. Extract all feedback categorised as Features or Bugs. Group feature requests by the specific feature mentioned (use keyword clustering: all feedback mentioning dark mode is one group, all mentioning API is another). Show each group with: the request count, representative quotes, average urgency, and a vote count from internal team members who agree this should be prioritised.

Deploy to Vercel with a PostgreSQL database. Ask Claude Code: Configure the production deployment. Set up connectors to run on a schedule (hourly for support tickets, daily for reviews and surveys). Configure the weekly report email. Test with real feedback data from at least two sources to verify the full pipeline works end-to-end.
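The report's impact prioritisation (volume times sentiment severity) can be sketched as a small scoring function. `prioritise` and `CategorySummary` are hypothetical names, and treating only negative average sentiment as severity is one assumed reading: praise-heavy categories then sort to the bottom of the action list.

```typescript
// Rank categories for the weekly report's action list by impact:
// feedback volume multiplied by how negative the average sentiment is.
type CategorySummary = { category: string; volume: number; averageSentiment: number };

function prioritise(summaries: CategorySummary[]): CategorySummary[] {
  // Positive sentiment contributes zero severity, so well-liked areas
  // never outrank genuine problems however large their volume.
  const impact = (s: CategorySummary) => s.volume * Math.max(0, -s.averageSentiment);
  return [...summaries].sort((a, b) => impact(b) - impact(a));
}
```

With 40 bug reports at -0.4 average sentiment and 10 pricing complaints at -0.5, bugs win (impact 16 versus 5), which matches the intent: a moderately negative issue affecting many customers beats a sharply negative one affecting few.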