Safety on 7 Cups
Last updated: March 8, 2024
Individual and Community Safety on 7 Cups
7 Cups aims to be a safe, trusted resource for giving and receiving emotional support. We take confidentiality, privacy, safety, and all forms of harassment very seriously. We have a series of policies, procedures, and programs in place to ensure safety across the platform.
- Identification and Participation: We monitor and actively participate in 1:1 chat, Noni chat, group chat, and community forums to identify unsafe activity and to model, encourage, and reward healthy emotional support behaviors.
- Validation and Intervention: Our active participation is complemented by several programs that verify reported activity and respond to it appropriately.
- Iteration: We continually update our approach to incorporate user feedback and a widening range of machine learning techniques, product enhancements, engineering features, training resources, and support for our users.
The following Terms and Policies govern our site:
- Terms of Service: The goal of this document is to prevent misuse or abuse of our services. It governs our right to suspend, ban, or stop providing our services if the terms are not followed or if we are investigating suspected misconduct.
- Privacy Policy: The goal of this document is to describe how we collect, use, disclose, and store personal information provided to us through the website and mobile application.
- Community Guidelines: The goal of this document is to promote a safe, warm, comfortable, inviting, and supportive atmosphere for those seeking support and for our fellow Listeners. It contains General, Forum, and Teen Mentor Guidelines; the Consequences of Violating Guidelines; and the bios of our Community Management Team.
- Teen Safety: The goal of this document is to explain the extra measures we have in place to protect users under 18. We take the safety of our teen population very seriously, with protocols for general safety, sexual abuse reporting, crisis response, and crime reporting.
- General Support and FAQs: The goal of this document is to answer common support queries, including "What is active listening?" and "What do I do if my listener is being inappropriate, abusive, or hurtful?"
Users are first made aware of and agree to our Terms of Service and Privacy Policy during the sign-up process. Both documents live permanently in the footer of our website and remain accessible throughout our community. They are updated as necessary; users are alerted to updates and required to agree to them.
7 Cups employs a sophisticated and mature set of safety measures informed by the experience of mental health professionals and aligned with online best practices for building safety into social environments that serve vulnerable user populations.
What happens when a user reports another user?
- Members can leave Text Reviews for Listeners, and all users can file Block Reports and Profile Flags in real time
- Censor Reports are automatically triggered by specific phrases (see the sketch after this list)
- Supervised volunteers review reports and flags and categorize them into one of three escalation levels
- Green - Text Reviews are approved to display on the Listener's profile
- Yellow/Orange - A Listener who criticizes or gives advice instead of responding empathetically automatically receives feedback when flagged. Five or more flags at this level result in rejection from the site
- Red - Sexual or flirtatious content, harassment, bullying, racist or hate speech, or misrepresenting one's age group (e.g., a teen using an adult account) results in immediate rejection from the community. A user flagged for these behaviors is blocked from engaging until subsequent human review.
- Each report is assigned a risk score based on its severity (e.g., requesting contact information, harassing behavior, inappropriate sexual chat).
- The reporting individual is assigned a trust score based on their overall activity and impact on the site.
- Risk calculations are cumulative and are not publicly displayed (see the scoring sketch after the sanctions list below).
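As a rough illustration of the phrase-triggered Censor Reports mentioned above, automatic flagging could work along the lines of the sketch below. The phrase list, the CensorReport structure, and the check_message helper are all illustrative assumptions; 7 Cups' actual trigger phrases and implementation are not public.

```python
import re
from dataclasses import dataclass

# Illustrative trigger phrases; the real list is internal and not public.
CENSOR_PHRASES = [
    r"\bwhat'?s your (phone )?number\b",
    r"\badd me on (snapchat|instagram|whatsapp)\b",
    r"\bhow old are you\b",
]

@dataclass
class CensorReport:  # hypothetical structure for a report queued for review
    chat_id: str
    message: str
    matched_phrase: str

def check_message(chat_id: str, message: str) -> CensorReport | None:
    """Return a CensorReport for human review if a trigger phrase matches."""
    lowered = message.lower()
    for pattern in CENSOR_PHRASES:
        match = re.search(pattern, lowered)
        if match:
            return CensorReport(chat_id, message, match.group(0))
    return None

# Example: this message would automatically generate a Censor Report.
report = check_message("chat-123", "Add me on Snapchat so we can keep talking")
if report:
    print(f"Censor Report filed: matched '{report.matched_phrase}'")
```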
Sanctions for reports in order of escalation:
- Direct feedback correspondence
- Mandatory self-care breaks
- Account automatically rejected
- Account banned
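Taken together, the severity-based risk scores, reporter trust scores, cumulative calculation, and sanction ladder described above could combine roughly as in the following sketch. The severity weights, trust weighting, and thresholds are invented for illustration and are not 7 Cups' actual formula.

```python
from dataclasses import dataclass

# Illustrative severity weights per report type (actual values are internal).
SEVERITY = {
    "requesting_contact_info": 3.0,
    "harassing_behavior": 5.0,
    "inappropriate_sexual_chat": 8.0,
}

# The sanction ladder above, keyed by invented cumulative-risk thresholds.
SANCTIONS = [
    (5.0, "direct feedback correspondence"),
    (10.0, "mandatory self-care break"),
    (20.0, "account automatically rejected"),
    (30.0, "account banned"),
]

@dataclass
class Account:
    user_id: str
    cumulative_risk: float = 0.0  # cumulative, never publicly displayed

def record_report(account: Account, report_type: str,
                  reporter_trust: float) -> str | None:
    """Add a trust-weighted risk increment and return any sanction now due.

    reporter_trust is assumed to lie in [0, 1], derived from the reporter's
    overall activity and impact on the site.
    """
    account.cumulative_risk += SEVERITY[report_type] * reporter_trust
    due = None
    for threshold, sanction in SANCTIONS:
        if account.cumulative_risk >= threshold:
            due = sanction  # the highest threshold crossed wins
    return due

# Example: repeated harassment reports from a trusted reporter escalate.
acct = Account("listener-42")
for _ in range(3):
    sanction = record_report(acct, "harassing_behavior", reporter_trust=0.9)
print(acct.cumulative_risk, sanction)  # 13.5 "mandatory self-care break"
```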
Additional interventions:
- Group support warnings via moderators and automatic mute functionality
- Customer support team on call to manage reports
- Forum flagging tool used for spam and inappropriate behavior
- Dedicated Safety and Knowledge Forum to discuss or report any issues
- Community leadership teams onsite 24/7 who are trained to manage and support a variety of situations
- 50+ trainings for listeners available 24/7
- Peer support and Mentor support
- Moderators who can remove inappropriate content
- Phone verification required for Members and Listeners
- Bi-weekly Internet Safety Discussions
- Weekly anonymous evaluations - like a secret shopper program, where listeners do not know they are talking to a tester. The tester completes the evaluation, sends the listener the results, and provides one-on-one mentoring to improve their skills.
- Members cannot talk with other members, except in group settings. Members can only talk directly with listeners.
- Crisis intervention - users must indicate that they are not suicidal, homicidal, or abusing anyone before using 7 Cups. If they enter language into the chat that suggests they are in crisis, that language is not passed through to the listener. Instead, the user is immediately referred to 988 or another appropriate hotline (a minimal sketch of this routing follows this list).
- Pinned messages appear at the top of every chat. The first reads, "Safety & Reporting - Staying safe online starts with protecting your privacy. DO NOT share your personal details, contact info or social media handles. Your safety is our top priority. If you feel unsafe, please visit our Safety and Reporting Center to learn how we can best help." The second is a list of hotlines and reads, "Your listener is here to help. However, if you are in crisis, please click here for a helpline."
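A minimal sketch of the crisis-intervention routing described above (withholding crisis language from the listener and immediately referring the user to a hotline) might look like this. The keyword list and the route_message helper are illustrative assumptions; real crisis detection would be considerably more nuanced.

```python
# Illustrative keywords only; real crisis detection is more sophisticated.
CRISIS_PHRASES = ["want to die", "kill myself", "end my life", "hurt someone"]

CRISIS_REFERRAL = (
    "It sounds like you may be in crisis. Please call or text 988 "
    "(Suicide & Crisis Lifeline) or click here for a helpline."
)

def route_message(message: str) -> tuple[str, str]:
    """Return (recipient, text): crisis language is not passed through to
    the listener; the member is immediately shown a hotline referral."""
    lowered = message.lower()
    if any(phrase in lowered for phrase in CRISIS_PHRASES):
        return ("member", CRISIS_REFERRAL)
    return ("listener", message)

recipient, text = route_message("I think I want to end my life")
print(recipient)  # "member" -- the listener never sees the crisis message
```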
Machine Learning/AI Efforts:
We also deploy computational linguistic models of user behavior to distinguish between banned and non-banned members, and between rejected and non-rejected active Listeners. We use these models to monitor the level of potentially unsafe language throughout the platform and to guide awareness of overall activity.
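As an illustration only, one simple form such a computational linguistic model could take is a text classifier trained on messages from banned versus non-banned accounts, with aggregate scores tracked over time for monitoring. The sketch below uses scikit-learn and invented example data purely to show the shape of the approach; it is not 7 Cups' model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: messages labeled by whether the author was later
# banned (1) or not (0). A real training set would be far larger.
messages = [
    "what's your number, let's talk off the app",  # banned
    "you're stupid, nobody would listen to you",   # banned
    "that sounds really hard, tell me more",       # not banned
    "I'm here for you, take your time",            # not banned
]
labels = [1, 1, 0, 0]

# TF-IDF features + logistic regression: a standard baseline text classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

# Monitoring: the average predicted risk over a window of recent platform
# messages gives a rough gauge of unsafe-language levels, guiding human
# awareness rather than triggering automatic sanctions.
recent = ["give me your instagram", "that must feel overwhelming"]
scores = model.predict_proba(recent)[:, 1]
print(f"mean unsafe-language score: {scores.mean():.2f}")
```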