May 12, 2025 • Community Case Study
When financial publisher Nordnet faced a surge of comment spam across 27,000+ pages, its community team was spending 40+ hours a day filtering harmful and irrelevant content. The spam included phishing links, fake product ads, and politically charged misinformation targeting financial discussions.
Before
- 85% of new comments required moderation review
- 200+ spam reports daily from users
- 12% drop in engagement attributed to poor comment quality
After
- Spam reduced to 17% of comments
- 75% faster moderation workflows
- User satisfaction increased 34%
The Challenge

Nordnet operates a complex financial news platform where comment quality directly affects reader trust in the publication's editorial integrity. The spam problem played out at considerable scale:
- 18.6M monthly unique visitors
- 225,000 average comments/day
- 35+ moderation staff
- 9.2M comment deletions in Q3 2024
The existing moderation tools couldn't keep pace with evolving spam tactics. Traditional pattern matching failed to detect context-based spam attacks, and manual filtering created an unsustainable workload for moderators.
User trust metrics showed a 14% decline in perceived editorial quality, with 68% of readers surveyed complaining about comment relevance. The situation threatened to undermine Nordnet's position as Finland's most-trusted financial publisher.
Our Solution
Multi-Layered Moderation Automation
AI-Powered Content Filters
- Real-time NLP analysis for financial fraud patterns
- Multilingual spam detection across 20 languages
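As a rough illustration of how a contextual filter like this might work, the Python sketch below scores comments against fraud-style patterns and link density. The patterns, weights, and threshold are placeholders invented for this article, not Nordnet's or Disqus' production rules, and a real multilingual deployment would maintain per-language rule sets or a trained model rather than English-only regexes.

```python
import re
from dataclasses import dataclass

# Illustrative fraud-pattern rules -- placeholders, not production rules.
FRAUD_PATTERNS = [
    re.compile(r"guaranteed\s+(returns?|profits?)", re.IGNORECASE),
    re.compile(r"(double|triple)\s+your\s+(money|investment)", re.IGNORECASE),
    re.compile(r"(whatsapp|telegram)\D{0,5}\+?\d{7,}", re.IGNORECASE),
    re.compile(r"\b(bit\.ly|tinyurl\.com)\b", re.IGNORECASE),
]

@dataclass
class FilterResult:
    is_spam: bool
    score: float
    matched: list

def score_comment(text: str,
                  pattern_weight: float = 0.4,
                  link_weight: float = 0.3,
                  threshold: float = 0.5) -> FilterResult:
    """Score a comment against fraud-style patterns and link density."""
    matched = [p.pattern for p in FRAUD_PATTERNS if p.search(text)]
    link_count = len(re.findall(r"https?://", text))
    score = pattern_weight * len(matched) + link_weight * link_count
    return FilterResult(is_spam=score >= threshold, score=round(score, 2), matched=matched)

if __name__ == "__main__":
    sample = "Guaranteed returns! Message me on WhatsApp +358401234567 http://bit.ly/xyz"
    print(score_comment(sample))  # several pattern hits plus a link push it over the threshold
```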
Custom Moderation Workflows
- Automated keyword/regex pattern updates every 72 hours
- Dynamic risk scoring based on comment sentiment
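A minimal sketch of how dynamic risk scoring could be wired up, assuming a simple blend of promotional tone, hostile tone, and accumulated user reports. The lexicons, weights, and routing thresholds below are illustrative assumptions, not Disqus defaults; in practice the signal lists would be regenerated on the 72-hour update cycle described above and backed by trained models.

```python
from enum import Enum

class Action(Enum):
    PUBLISH = "publish"
    REVIEW = "human_review"
    REMOVE = "auto_remove"

# Tiny illustrative lexicons; a production system would use trained sentiment
# and spam models rather than word lists.
PROMO_TERMS = {"free", "guaranteed", "winner", "click", "bonus"}
HOSTILE_TERMS = {"scam", "fraud", "idiot", "stupid"}

def risk_score(text: str, report_count: int = 0) -> float:
    """Blend promotional tone, hostile tone, and user reports into a 0-1 risk score."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    promo = len(words & PROMO_TERMS) / max(len(words), 1)
    hostile = len(words & HOSTILE_TERMS) / max(len(words), 1)
    reports = min(report_count, 5) / 5  # cap the influence of user reports
    return min(1.0, 0.5 * promo + 0.2 * hostile + 0.3 * reports)

def route(text: str, report_count: int = 0,
          remove_at: float = 0.7, review_at: float = 0.4) -> Action:
    """Route a comment by risk score; thresholds here are placeholders."""
    score = risk_score(text, report_count)
    if score >= remove_at:
        return Action.REMOVE
    if score >= review_at:
        return Action.REVIEW
    return Action.PUBLISH

if __name__ == "__main__":
    print(route("Guaranteed free winner, click now!", report_count=5))  # Action.REMOVE
```

The point of the routing tiers is the human handoff: only comments in the middle band reach moderators, which is what lets the team spend its time on engagement rather than bulk triage.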
Outcomes
- 82% spam reduction
- 290% moderation productivity
- +37% positive engagement
Qualitative Results
- 12% increase in genuine reader discussions
- 17% improvement in comment relevance
- 93% of spam removed automatically
"After implementing these automated filters, our moderation team was able to focus entirely on community engagement instead of manual spam hunting," says Liisa Tuomisto, Nordnet's Head of Community. "The AI filters catch 98% of spam before they ever reach our moderators, and we've seen meaningful improvements in user trust metrics."
“Disqus’ contextual spam filters transformed our comment sections from wastelands of phishing attacks to productive financial discussions. Our readers now trust the community again.”
Takeaway
Nordnet's success shows how intelligent moderation systems can transform the way organizations handle spam. By combining machine learning with human oversight, platforms can maintain authentic discussions at scale.
We've learned that effective spam prevention requires:
- Dynamic pattern recognition that adapts to new attack vectors
- Context-aware filtering for niche industries
- Seamless human moderation handoffs
- Real-time feedback loops from spam reports
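The last point, feedback loops, can be pictured as a small accumulator that folds moderator-confirmed spam back into the active blocklist. This is a hypothetical sketch rather than Disqus' implementation; the promotion threshold is an arbitrary example value.

```python
from collections import Counter

# Illustrative feedback loop: terms from moderator-confirmed spam are tallied and
# promoted into a blocklist once enough independent confirmations accumulate.
confirmed_term_counts = Counter()
blocklist = set()

def ingest_confirmed_spam(comment_text: str, promote_after: int = 25) -> None:
    """Fold a confirmed spam comment back into the filter's blocklist."""
    for term in {w.strip(".,!?").lower() for w in comment_text.split()}:
        confirmed_term_counts[term] += 1
        if confirmed_term_counts[term] >= promote_after:
            blocklist.add(term)
```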
This case study is available as a technical whitepaper for enterprise clients looking to implement similar systems.
Ready to Clean Your Community?
Request a spam reduction analysis to see how we can help your community stay healthy, productive, and free of abuse.
Schedule Analysis