Lystd Content Moderation Policy
Last Updated: March 2026
Lystd uses a combination of automated systems, user reporting, and human review to maintain a safe, respectful, and authentic environment. This policy explains how content is evaluated, rated, and enforced across the platform.
1. Moderation Approach
Lystd uses multiple layers of moderation, including:
Automated Systems
- Keyword and phrase detection
- Image and media scanning tools
- Machine-learning classifiers for safety risks
- Automated flagging for suspicious behavior
User Reporting
- In-app reporting for posts, profiles, messages, and behavior
- Prioritization based on severity and volume of reports
Human Review
- Manual review by trained moderators
- Escalation for high-risk or sensitive cases
- Final decisions made by human moderators
Moderation is applied consistently and fairly across all users.
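For illustration, the layered flow above can be sketched in a few lines of Python. This is a simplified conceptual sketch, not Lystd's actual implementation; the names (`Post`, `automated_scan`, `moderate`) and the report threshold are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    ESCALATE = "escalate"  # queued for human review
    BLOCK = "block"

@dataclass
class Post:
    text: str
    report_count: int = 0

def automated_scan(post: Post) -> Verdict:
    """Layer 1: a hypothetical keyword check standing in for the
    automated detection tools described above."""
    flagged_terms = {"example-banned-term"}
    if any(term in post.text.lower() for term in flagged_terms):
        return Verdict.BLOCK
    return Verdict.ALLOW

def moderate(post: Post, review_threshold: int = 3) -> Verdict:
    # Layer 1: automated systems run first on every post.
    verdict = automated_scan(post)
    if verdict is Verdict.BLOCK:
        return verdict
    # Layer 2: user reports can escalate a post to human review.
    if post.report_count >= review_threshold:  # threshold is illustrative
        return Verdict.ESCALATE
    # Layer 3: escalated posts go to trained moderators, who make
    # the final decision; everything else remains visible.
    return verdict
```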
2. Content Rating System
Lystd uses a three-tier content rating system to classify posts and determine visibility and enforcement actions.
Rating 0 — Normal
Content appropriate for general audiences, including social meetups, friend connections, networking, group activities, casual hangouts, and everyday conversations. Normal content is visible to all users.
Rating 1 — Sensitive
Content that may include mature themes but does not violate platform rules. Examples include mature discussions, suggestive language, strong profanity, non-explicit adult topics, and mildly provocative imagery (non-sexual).
Sensitive posts:
- Must be marked by the user or the moderation system
- Are hidden unless users enable "Sensitive Content" in settings
- May be reviewed if repeatedly reported
Sensitive content may not include nudity intended for sexual arousal or explicit sexual acts.
Rating 2 — Blocked
Content that violates Lystd's rules and is removed immediately.
Sexual Content Violations
- Escorting or sexual services
- Explicit pornography
- Sexual acts or nudity intended for arousal
- Content involving minors
- Revenge porn or non-consensual intimate images
Safety Violations
- Harassment, bullying, or stalking
- Threats or encouragement of violence
- Encouragement of self-harm
- Hate speech or discriminatory content
Illegal or Dangerous Activity
- Drugs or illegal substances
- Weapons or violence
- Fraud, scams, or impersonation
- Human trafficking or exploitation
Platform Abuse
- Spam or mass messaging
- Bot activity
- Manipulation of visibility or signals
- Ban evasion
Posting blocked content may also result in account-level restrictions or removal.
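As a rough sketch, each rating maps to a simple visibility rule. The Python below illustrates the three-tier system as written in this policy; the `Rating` enum, the `is_visible` function, and the settings flag are hypothetical names, not the platform's actual code.

```python
from enum import IntEnum

class Rating(IntEnum):
    NORMAL = 0     # visible to all users
    SENSITIVE = 1  # hidden unless the viewer opts in
    BLOCKED = 2    # removed immediately

def is_visible(rating: Rating, sensitive_enabled: bool) -> bool:
    """Illustrative visibility rule for the three-tier system."""
    if rating is Rating.BLOCKED:
        return False              # Rating 2 content is removed outright
    if rating is Rating.SENSITIVE:
        return sensitive_enabled  # requires the "Sensitive Content" setting
    return True                   # Rating 0 content is shown to everyone

# Example: a Sensitive post is hidden from a viewer with default settings.
assert is_visible(Rating.SENSITIVE, sensitive_enabled=False) is False
```

An ordered scale like this keeps the tiers easy to compare: a higher rating always means stricter handling.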
3. User Reporting
Users may report posts, profiles, messages, behavior, and safety concerns. Reports are prioritized for review based on severity, report volume, the reported account's history of violations, and risk to user safety. Posts receiving multiple reports may be automatically hidden pending review. False or malicious reporting may itself result in enforcement action.
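Conceptually, this prioritization resembles a weighted scoring queue with an auto-hide threshold. The sketch below is purely illustrative: the weights, the threshold, and the function names are invented for the example and are not Lystd's real parameters.

```python
def report_priority(severity: int, report_count: int,
                    past_violations: int) -> int:
    """Illustrative priority score: higher scores are reviewed sooner.
    The weights here are hypothetical, not Lystd's actual values."""
    return severity * 10 + report_count * 2 + past_violations * 5

def should_auto_hide(report_count: int, hide_threshold: int = 5) -> bool:
    # Posts that collect enough reports are hidden pending human review.
    # The threshold is illustrative.
    return report_count >= hide_threshold
```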
4. Enforcement Actions
Depending on the severity and frequency of violations, Lystd may take one or more of the following actions:
Content-Level Actions
- Content removal
- Content restriction (Sensitive rating)
- Temporary hiding pending review
Account-Level Actions
- Warning notifications
- Feature restrictions
- Temporary suspension
- Permanent account ban
- Device or phone-number ban for severe cases
Legal & Safety Actions
- Escalation to law enforcement when required
- Preservation of evidence for investigations
Enforcement decisions are made based on context, severity, and user safety.
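One way to picture the relationship between severity, repeat violations, and account-level actions is as an escalation ladder. The thresholds and labels in this sketch are hypothetical; as noted above, real decisions also weigh context and user safety.

```python
def enforcement_action(severity: str, prior_violations: int) -> str:
    """Illustrative escalation ladder; the categories and thresholds
    are hypothetical, and real decisions also consider context."""
    if severity == "severe":
        # e.g., content involving minors or human trafficking
        return "permanent ban (possibly device or phone-number ban)"
    if prior_violations == 0:
        return "warning notification"
    if prior_violations <= 2:
        return "feature restriction or temporary suspension"
    return "permanent account ban"
```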
5. Appeals
If you believe a moderation action was taken in error, you may appeal by contacting support@quailsofts.com. Not all decisions are reversible, particularly those involving safety risks or legal obligations.
6. Platform Safety & Continuous Improvement
Lystd continuously improves its moderation systems by updating automated detection tools, training moderators on new safety risks, reviewing policy effectiveness, enhancing user reporting tools, and monitoring emerging trends in online harm. Our goal is to maintain a safe, respectful, and authentic community.
7. Contact
For moderation questions or concerns: support@quailsofts.com