Content Moderation
Manage user-generated content to maintain community standards and platform safety.
Overview
BookWish administrators can moderate two types of user-generated content:
- Lines - User-shared quotes and passages from books
- Reviews - Book reviews and ratings
Both content types have a moderationStatus field that controls visibility.
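For example, a stored line or review might carry the field like this (the surrounding fields and exact status values are illustrative; see the statuses below):
{
  "id": "abc123",
  "text": "A quote shared from a book...",
  "moderationStatus": "visible"
}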
Moderation Statuses
Visible (Default)
- Content is publicly visible
- Appears in feeds and search results
- Default status for all new content
Hidden
- Content is hidden from public view
- Not shown in feeds or search results
- The original poster can still see it, with a notice that it has been hidden
- Can be unhidden if the moderation decision is reversed
Moderation Queue
Accessing the Queue
While a dedicated moderation queue UI does not exist yet, you can identify content requiring review through:
- User Reports - Content with pending reports (see the example query after this list)
- Automated Flags - Content matching spam/abuse patterns (if implemented)
- Manual Review - Periodic review of recent content
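The report endpoints documented below only cover resolving a single report. If the API also exposes a listing endpoint, a query for pending reports might look like the following (the path and query parameter are hypothetical):
curl "https://api.bookwish.com/api/admin/reports?status=pending" \
-H "Authorization: Bearer <admin_token>"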
Priority Content
Review these items first:
- Content with multiple reports
- Reports marked as high-severity (hate speech, harassment)
- Viral content with reports
- Repeat offender content
Reviewing Content
Review Process
Step 1: Context Gathering
- Read the complete content
- Review any images or media
- Check surrounding context (book, thread)
- Review reporter's stated reason
Step 2: Guideline Evaluation
- Does it violate community guidelines?
- Is it spam or promotional?
- Is it off-topic or low-quality?
- Does it contain harassment or hate speech?
Step 3: History Check
- Review user's previous content
- Check for prior violations
- Look for patterns of abuse
- Consider user tier and standing
Step 4: Action Decision
- Determine appropriate action
- Document reasoning
- Apply action consistently
- Follow escalation procedures if needed
Moderation Actions
Hide Content
Use the hide content endpoint to remove content from public view:
API Endpoint:
PUT /api/admin/content/:type/:id/hide
Authorization: Bearer <admin_token>
Parameters:
- type - Either line or review
- id - The content ID
Example:
curl -X PUT https://api.bookwish.com/api/admin/content/line/abc123/hide \
-H "Authorization: Bearer <admin_token>"
When to Hide:
- Clear guideline violations
- Spam or promotional content
- Harassment or targeted abuse
- Hate speech or discrimination
- Misinformation (in reviews)
- Inappropriate or NSFW content
- Copyright violations
Unhide Content
Restore previously hidden content if the moderation decision is reversed:
API Endpoint:
PUT /api/admin/content/:type/:id/unhide
Authorization: Bearer <admin_token>
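Example (mirroring the hide request above; the review ID is a placeholder):
curl -X PUT https://api.bookwish.com/api/admin/content/review/abc123/unhide \
-H "Authorization: Bearer <admin_token>"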
When to Unhide:
- Moderation mistake or false positive
- User successfully appeals
- Context clarifies content is acceptable
- Guidelines updated to allow content
No Action (Dismiss Report)
If the content doesn't violate guidelines, dismiss the report without hiding it:
API Endpoint:
PUT /api/admin/reports/:id
Authorization: Bearer <admin_token>
Content-Type: application/json
{
  "status": "dismissed",
  "notes": "Content does not violate community guidelines"
}
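For example, dismissing a report from the command line (the report ID is a placeholder):
curl -X PUT https://api.bookwish.com/api/admin/reports/report123 \
-H "Authorization: Bearer <admin_token>" \
-H "Content-Type: application/json" \
-d '{"status": "dismissed", "notes": "Content does not violate community guidelines"}'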
Content Types
Lines
What They Are:
- Quotes and passages from books
- Shared by users to highlight favorite moments
- Can include page numbers and notes
- Public by default, appear in feeds
Common Issues:
- Copyrighted material (extensive quotes)
- Out-of-context offensive passages
- Spam (repetitive posting)
- Off-topic content
Moderation Considerations:
- Literary quotes may contain mature themes
- Context matters (what's the book's genre?)
- Educational vs. gratuitous content
- Author's intent vs. user's intent
Reviews
What They Are:
- User opinions and ratings of books
- Can include spoilers (should be marked)
- Help other users discover books
- Influence book recommendations
Common Issues:
- Spam or fake reviews
- Harassment of authors
- Off-topic rants
- Spoilers without warnings
- Promotional content
Moderation Considerations:
- Negative opinions are allowed
- Criticism vs. harassment
- Review authenticity
- Relevance to the book
Community Guidelines
Prohibited Content
Spam & Manipulation:
- Repetitive posts
- Promotional content
- Fake reviews
- Vote manipulation
Harassment & Abuse:
- Personal attacks
- Bullying or intimidation
- Doxxing (sharing personal info)
- Targeted harassment campaigns
Hate & Discrimination:
- Hate speech based on identity
- Discrimination
- Slurs and epithets
- Promotion of hate groups
Illegal Content:
- Copyright infringement (extensive copying)
- Illegal activity promotion
- Threats of violence
- Child safety violations
Misinformation:
- Demonstrably false information
- Harmful medical/health claims
- Conspiracy theories (context-dependent)
Allowed Content
Critical Reviews:
- Negative book reviews
- Criticism of ideas or works
- Disagreement with other reviewers
Mature Themes:
- Book quotes containing adult themes
- Discussion of mature book content
- Spoilers (if properly marked)
Strong Opinions:
- Passionate viewpoints
- Controversial takes on books
- Unpopular opinions
Best Practices
Consistency
- Apply guidelines uniformly
- Don't favor popular users
- Document decisions
- Review borderline cases with team
Transparency
- Explain moderation decisions when asked
- Publish guideline clarifications
- Share anonymized examples
- Update guidelines based on community feedback
Speed
- Review reports within 24 hours
- Prioritize high-severity issues
- Batch similar reports
- Automate handling of obvious violations (future)
Context
- Consider cultural differences
- Understand literary context
- Check for satire or parody
- Review full threads, not isolated comments
Appeals
- Allow users to appeal decisions
- Review appeals fairly
- Admit mistakes when made
- Learn from reversed decisions
Escalation
When to Escalate
Escalate to senior admins or platform owners when:
- Legal concerns (copyright, defamation)
- Safety issues (threats, doxxing)
- High-profile users or content
- Unclear guideline application
- Potential platform risk
How to Escalate
- Document the situation thoroughly
- Note why escalation is needed
- Provide context and history
- Recommend action (if appropriate)
- Follow up on decision
Moderation Tools
Current API Endpoints
Hide Content:
PUT /api/admin/content/line/:id/hide
PUT /api/admin/content/review/:id/hide
Unhide Content:
PUT /api/admin/content/line/:id/unhide
PUT /api/admin/content/review/:id/unhide
Review Report (includes content link):
PUT /api/admin/reports/:id
Body: { status: "action_taken", notes: "..." }
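As a sketch, resolving a report that required action might chain the two endpoints above (the IDs are placeholders):
# Hide the reported line
curl -X PUT https://api.bookwish.com/api/admin/content/line/abc123/hide \
-H "Authorization: Bearer <admin_token>"
# Mark the associated report as resolved
curl -X PUT https://api.bookwish.com/api/admin/reports/report123 \
-H "Authorization: Bearer <admin_token>" \
-H "Content-Type: application/json" \
-d '{"status": "action_taken", "notes": "Hidden for harassment"}'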
Future Tools
Planned moderation features:
- Bulk actions (hide multiple items)
- Automated spam detection
- User reputation scoring
- Content filters and keywords
- Moderation queue UI
- Appeal system
Documentation
Record Keeping
Document all moderation actions:
- Content ID and type
- Action taken (hide/unhide/dismiss)
- Reasoning and guideline violated
- Report ID (if applicable)
- Admin performing action
- Timestamp
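There is no prescribed format for these records; a simple JSON entry covering the fields above might look like this (all field names and values are illustrative):
{
  "contentId": "abc123",
  "contentType": "line",
  "action": "hide",
  "reason": "Harassment - violates 'No Personal Attacks' guideline",
  "reportId": "report123",
  "admin": "admin@bookwish.com",
  "timestamp": "2025-01-15T10:30:00Z"
}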
Resolution Notes
When reviewing reports, add notes explaining:
- Why content was/wasn't hidden
- Which guideline was violated
- Additional context
- Related reports or patterns
Example:
{
  "status": "action_taken",
  "notes": "Content hidden for harassment (targeting specific user by name). User has 2 prior warnings for similar behavior. Content clearly violates 'No Personal Attacks' guideline."
}
Common Scenarios
False Positive Report
Situation: User reports content they disagree with, but it doesn't violate guidelines.
Action:
- Review content against guidelines
- Dismiss report with explanation
- Mark the report as reviewed
- Monitor the reporter for abuse of the reporting system
Borderline Content
Situation: Content is questionable but doesn't clearly violate guidelines.
Action:
- Discuss with other admins
- Consider community context
- Err on the side of allowing content
- Document reasoning for future reference
- Consider clarifying guidelines if a pattern emerges
Repeat Offender
Situation: User has multiple content violations.
Action:
- Hide violating content
- Review all of the user's content
- Document pattern
- Consider account-level action (warning, suspension)
- See User Management
Mass Reports
Situation: Single piece of content receives many reports quickly.
Action:
- Prioritize immediate review
- Assess whether the reports are coordinated (brigading)
- Review content thoroughly
- Take action if warranted
- Investigate source of reports if suspicious