Open-text comments have always been an incredibly valuable source of employee insight, revealing nuance and emotion that quantitative surveys alone can’t capture. They are time-consuming to analyze, but historically the insight has been worth the effort it takes to comb through so much data.
As technology has evolved, the task has become steadily easier, first with machine learning algorithms like OrgVitality’s UQ, which quickly sorts comments by how useful or actionable they are, as well as by sentiment, job level, and other factors. Natural language processing (NLP), large language models, and automated summarization have made the task easier still. Used well, AI can dramatically accelerate insight generation. Used poorly, it can flatten meaning, introduce bias, and create false confidence.
AI should absolutely help with open-ended comment analysis, but survey practitioners should apply it carefully, thoughtfully, and with human judgment at every step. Here’s our guide to everything you need to know about AI and open-ended comment analysis:
Where AI Adds Genuine Value
AI excels at computational tasks and can operate at a scale and speed unavailable to humans. Organizations used to spread comment analysis among dozens of employees who might take months to sift through the comments looking for themes; AI can process tens of thousands of comments in minutes. This alone improves representativeness. It can also detect patterns quickly, especially across large data sets, and can highlight issues that a manual review might miss. Lastly, AI is well suited to generating an initial coding structure, especially when comments are short, focused, and consistently framed, as the sketch below illustrates.
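To make that first pass concrete, here is a minimal sketch of asking a model to propose a draft coding structure from a batch of comments. It assumes the OpenAI Python SDK; the model name, prompt wording, and the propose_themes helper are our own illustrative assumptions, not a specific OrgVitality method.

```python
# Minimal sketch: asking an LLM to propose an initial coding structure
# from a batch of survey comments. Assumes the OpenAI Python SDK;
# the model name and prompt wording are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def propose_themes(comments: list[str], max_themes: int = 8) -> str:
    """Ask the model for a draft set of themes. The output is a
    starting point for human review, not a final coding frame."""
    sample = "\n".join(f"- {c}" for c in comments)
    prompt = (
        f"Below are employee survey comments. Propose at most {max_themes} "
        "candidate themes, each with a one-line definition and one example "
        "comment. Flag any comment that fits no theme.\n\n" + sample
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: swap in whatever model you use
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

draft = propose_themes([
    "Love my team, but promotion criteria are unclear.",
    "The new tooling rollout slowed everything down.",
])
print(draft)  # a draft coding structure for a human to refine
```

The key design choice is that the model returns candidate themes with definitions and examples, so a human can accept, merge, or reject each one rather than inheriting an opaque taxonomy.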
[Read More: Generative AI at Work]
Where AI Consistently Falls Short
The most common errors in AI-enabled open-text analysis occur when practitioners treat AI outputs as conclusions rather than inputs. Comments may mix emotions or contain nuanced feedback that AI misses. AI also won’t understand organization-specific context, such as leadership dynamics or recent change initiatives; even when fed this information, it may not apply it reliably. Acronyms often go unrecognized, sarcasm is missed, and the overall interpretation lands a bit off the mark. Too many people treat AI-generated summaries as objective truth, even when the underlying data or logic is thin. Practitioners must actively counter this tendency with appropriate reviews and with packaging suited to the intended audience.
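One practical form those reviews can take, sketched here as an illustration of our own devising rather than a prescribed method, is routing a random sample of AI-labeled comments to human auditors and tracking agreement. The sample rate and the dictionary keys are assumptions.

```python
# Minimal sketch: spot-checking AI labels with a human audit sample.
# The 10% sample rate and the 'comment'/'ai_theme'/'human_theme'
# field names are illustrative assumptions.
import random

def audit_sample(labeled: list[dict], rate: float = 0.10, seed: int = 42) -> list[dict]:
    """Return a random sample of AI-labeled comments for human review.
    Each dict is assumed to hold 'comment' and 'ai_theme' keys."""
    random.seed(seed)  # fixed seed so the audit is reproducible
    k = max(1, int(len(labeled) * rate))
    return random.sample(labeled, k)

def agreement_rate(audited: list[dict]) -> float:
    """Share of sampled items where the human reviewer confirmed the
    AI theme (the reviewer fills in 'human_theme' during the audit)."""
    confirmed = sum(1 for row in audited if row.get("human_theme") == row["ai_theme"])
    return confirmed / len(audited)
```

If agreement falls below whatever bar the team sets, the fix is usually to revise the theme definitions themselves, not just the individual labels.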
[eBook: Evaluating AI in HR Tech]
Joint AI and Human Analysis
The most efficient and effective analysis uses AI to make the survey practitioner’s role more impactful, rather than replacing it. Use AI to process the full comment set, identify high-level themes, and surface anything unexpected; this provides coverage and efficiency. Humans must review (or, ideally, write) theme definitions, read representative comments, and look for tensions, subthemes, and contradictions that AI glosses over. They should then interpret the results in the context of the organization, previous survey results, and more. Most importantly, humans should craft the narrative that leaders receive. The sketch below shows one way this division of labor can be wired up.
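Here is a minimal sketch of that triage, under our own assumptions: the CodedComment fields, the confidence score, and the 0.7 threshold are all hypothetical choices for illustration, not a standard.

```python
# Minimal sketch: AI proposes a theme per comment against a
# human-approved codebook; low-confidence calls go to a human queue.
# Field names and the 0.7 threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CodedComment:
    text: str
    ai_theme: str
    ai_confidence: float           # however your pipeline scores this
    human_theme: str | None = None # filled in during human review
    notes: str = ""                # tensions, subthemes, contradictions

def triage(coded: list[CodedComment], threshold: float = 0.7):
    """Split AI-coded comments into auto-accepted and human-review sets."""
    accepted = [c for c in coded if c.ai_confidence >= threshold]
    review_queue = [c for c in coded if c.ai_confidence < threshold]
    return accepted, review_queue

accepted, review_queue = triage([
    CodedComment("Pay is fair but growth paths are vague.", "career growth", 0.91),
    CodedComment("Fine I guess :)", "general sentiment", 0.42),
])
# Humans read the review queue, resolve ambiguity, refine theme
# definitions, and write the narrative that leaders ultimately see.
```

The point of the threshold is not precision for its own sake; it is a dial that controls how much of the practitioner’s time goes to the ambiguous comments where human judgment matters most.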
Preserving Trust with Employees
One often-overlooked dimension of AI-driven open-text analysis is employee trust. If employees believe their comments are being “summarized by a machine,” they may alter what they say. As always, OrgVitality strongly recommends clear and concise communications that outline confidentiality protections and explain how feedback will be used, including a note that while AI may help summarize comments, humans will ultimately interpret and act on the feedback.