Anonymous Evaluations Team - Update
Updated January 13, 2022: We have eliminated the Feedback Team. The AE Team still exists. I have updated the post to reflect this change.
Updated July 26, 2021: If you are already a mentor and want to join the AE Team, please apply here! You can also apply for the Anonymous Evaluations Feedback (AEF) Team here! Team leads have been updated on all applications. You can only be on one team at a time, not both, so pick whichever one works best for you.
Please note that both applications require you to be a mentor. If you are not a mentor, please apply to be a Quality Mentor here and indicate that you would like to join the AE Team, as the AEF Team requires being a mentor (any track) for 2+ months.
___
(Original Post)
A few months ago, we announced (here) that we wanted to bring back the Anonymous Evaluations (AE) Team. We received a lot of feedback on that post, all of which has been incredibly valuable for helping to guide our planning and testing. Thank you!!
Here are just a few examples of questions that came out of the feedback, which we've been working very hard to figure out how to address:
How do we set up anonymous evaluations in a way that complements other quality projects without being redundant?
Which listeners should we contact for anonymous evaluations?
What criteria are we using to evaluate listeners?
How do we ensure that we are evaluating listeners in a fair, consistent, and relatively objective way?
Who will provide feedback and how will they provide it?
How do we give feedback in a way that will help listeners improve while remaining positive and compassionate?
At this point, we feel ready to proceed with officially announcing that this program has been green-lit!
Now that the process has been determined, we will be forming criteria and an application shortly. See my end note for more details.
This is what we are planning to implement:
All listeners will be eligible for anonymous evaluations. This felt like the fairest approach to take. Other quality initiatives are in the works that we can use to focus on specific groups like new listeners. We will be doing evaluations through both Personal Requests and General Requests. Each listener will be evaluated only once.
There will be two separate teams, one for doing the actual evaluations and another for providing feedback to listeners. For now, both teams will be recruited from mentors. There will be quotas to ensure that this program reaches enough listeners to have a meaningful impact on overall listener quality.
Each evaluator will be given a separate member account that they will use only for anonymous evaluations.
For each chat, the evaluator will pick a topic that is not too intense/triggering. The topic should also be familiar enough to the evaluator that the chat can feel realistic. For PR chats, if the listener has any topics listed in their profile, we'd pick from one of those topics. For GR chats, the evaluator would pick the topic when they submit the request.
We'd expect most chats to last about 20 minutes, but they could be longer or shorter depending on how long the evaluator needs to gauge the listener's overall quality.
After each chat, the evaluator will fill out a form that has a list of possible issues that could have occurred. For each issue, the evaluator will mark whether it occurred and if so how severe it was (either minor or major). Evaluators will be trained beforehand on what each issue means and how to determine the severity level. The form will also include comment boxes where evaluators will provide details/examples for any issues that occurred and describe anything else that the listener did well or poorly.
Here is the current list of issues we're planning to evaluate (see the sketch of a completed form record after this list):
Sexual/inappropriate chat
Bullying/harassment
Underage listener
Requesting/sharing contact info
Ghosting/incorrect blocking
Empathy
Professionalism
Advice-giving
Response time
Talking about own problems
Imposing beliefs/views
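To make the form concrete, here's a rough sketch of what one completed evaluation record could look like. This is purely illustrative: the field names and structure are hypothetical, not the team's actual design, which is still being worked out.

```python
# Hypothetical shape of one completed evaluation form.
# Issue names come from the list above; the record structure
# itself is invented for illustration only.

from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class IssueReport:
    issue: str                       # one of the issues listed above
    occurred: bool
    severity: Optional[str] = None   # "minor" or "major", if the issue occurred
    details: str = ""                # examples/evidence from the chat

@dataclass
class EvaluationForm:
    listener: str
    chat_type: str                                   # "PR" or "GR"
    issues: List[IssueReport] = field(default_factory=list)
    other_comments: str = ""                         # anything else done well or poorly

form = EvaluationForm(
    listener="ExampleListener",
    chat_type="GR",
    issues=[
        IssueReport("Empathy", occurred=True, severity="minor",
                    details="Few reflections; mostly yes/no questions."),
        IssueReport("Advice-giving", occurred=False),
    ],
    other_comments="Warm greeting, quick responses throughout.",
)
```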
For each listener that's been evaluated, someone from the feedback team will review the form responses and contact that listener by PM to let them know they've been evaluated. We'll use compliment sandwiches, provide resources/links, frame feedback as an opportunity for further growth/training rather than a form of punishment, etc.
If a listener did well, the evaluator can leave a positive review. Evaluators won't ever leave mixed or negative reviews.
The introductory active listening training will be updated to notify listeners that this program is happening and that listeners may receive an AE chat at some point.
The team leaders for this team will be:
@EvelyneRose - Admin oversight of the whole project
@MidnightRaven999 - AE Team Ambassador / Lead Evaluator
@InternalAcceptance - AE Team Ambassador
@FrenchMarbles - Mentor Leader
What stage are we at now?
This is just an announcement post. We need to figure out team logistics first before adding anyone to the team, but all team members will need to be a Mentor (adult side) or Teen Star (teen side) to join. If you are already a Mentor/Teen Star, any track qualifies you to apply. If you are not yet a Mentor or Teen Star, you will need to fill out a Quality track application. Details below.
If you are already a mentor/teen star: We will have an application shortly that we will post in the next few weeks. As is the standard rule of all teams, just applying will not automatically get you on the team. Your app must be accepted first.
If you are not yet a mentor/teen star on any track, but would like to join this project: you must fill out a Quality Mentor/Teen Quality Star (QM/TQS) app here. You will not automatically be able to join this team just by applying; you will need to pass the Quality Mentor app first. This means that your application must go through processing and be accepted. If accepted, we will send you on to the team leaders to be trained.
Spots will be filled according to need. We will have the application ready over the next few weeks.
Please reach out to any of us if you have any questions! And as always, constructive feedback, questions, concerns, comments, etc. can go below :) I'm going to let @Midnightraven999 answer most of the questions as well so that you can get to know them :)
[updated by Evelynerose on July 13, 2022]
This is an awesome initiative for maintaining listener quality on 7 Cups. As stated in a previous post, this is similar to how businesses use "mystery shoppers" to evaluate whether their corporate standards for service and promptness are being uniformly carried out in their stores.
An independent quality program like this should evaluate all listeners against the 7C standards of service and promptness. The execution and outcomes of AE should not be tied to any program or rewards (e.g., badges, cheers). Listeners should always strive to be at their best when listening to members; unfortunately, if they are not, then they are not really in line with the standards of service here. An AE always results in a teaching/coaching opportunity.
We also can't ignore the elephant in the room: 7C needs to simultaneously identify any listeners who may not be using the site for its intended purposes.
I'm glad to see that this initiative is finally going to come to be. In the beginning, it may seem like 7C is "cleaning house", but in reality, they're just trying to keep the "house" in order.
@EvelyneRose I don't understand this part.
"If a listener did well, the evaluator can leave a positive review. Evaluators wont ever leave mixed or negative reviews."
Is it about the review forum popping up at the end of every chat?
@kindSoul10 that's how I understood it, that if the listener did well, the AE "member" could leave a positive review just the same way that a regular member could, like the end-of-chat rating/review -- sort of a way to make sure the listener's time and efforts weren't wasted or not acknowledged :)
@TaranWanderer thanks for replying.
Please excuse me for going off-topic, but I noticed there are weird characters in your post that remind me of a software error. Did you use the app or the browser to post your reply?
@kindSoul10 yeah I've been seeing them all over the site for a few days :( I'm using the browser (chrome) on my phone to reply
@TaranWanderer thank you. What are your system's language settings?
@TaranWanderer
It's a bug; it should be fixed soon.
@EvelyneRose thought so :) I assumed someone was making some changes to the site and created a little bug or something :p
@kindSoul10 Yup, that's right. Like if someone from the AE team chats with a listener and has an amazing chat experience where they really want to leave a positive review (like a normal member would), they're welcome to do that.
@EvelyneRose wow, this excites me very much, Rose. I am always in to try anything dealing with good ways to strengthen our listening community, and I love doing so. I remember back yonder when I was fully into anonymously evaluating listeners. It was good, but idk if we were making a change, or more so putting personal judgement on the listener. So I am so happy you all are really working on this to make it so clear for us.
Keep up the great work
@EvelyneRose
I think this is a great initiative and a good way to ensure quality listening. It is all the more important because I often have to deal with members who have been very disappointed with other listeners, and sometimes even conned into sharing too much information.
This initiative would really help weed out some folks who really don't belong as listeners, or who ought to have an opportunity to learn how to be better listeners.
Thanks!!
@Fradiga Thanks! We also hope this helps with listener quality.
@EvelyneRose
Given the cyclopean scale of the task, I suggest prioritizing according to the impact each listener has on the overall quality of 7 Cups.
Namely, I would suggest that, if possible, controls start with the listeners that connect with the most members per week or month. While they are the "heroes" for their huge contribution, they are also the ones with the most impact on the quality of the 1-1 chat service. So the other side of the coin is that they have the highest "damage capability" to the quality of 7 Cups.
I would use the quantity of members served, and not "chats", as the ranking metric, because the "chats" variable (as I understand it) is affected by the length of each conversation.
In any case, the idea is that someone who takes 5 chats a day, every day, handles 5 × 7 = 35 chats a week, and so has 35 times more impact on overall quality than someone who takes 1 chat a week.
So by doing 1 test on the first one, we can potentially improve quality 35 times more than by doing 1 test on the latter (see the quick sketch below).
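To make the arithmetic explicit (the numbers here are just illustrative):

```python
# Back-of-the-envelope impact comparison; figures are illustrative only.
heavy_listener_chats_per_week = 5 * 7   # 5 chats/day, every day = 35 chats/week
light_listener_chats_per_week = 1       # 1 chat/week

# One evaluation of the heavy listener "covers" 35x as many
# member interactions as one evaluation of the light listener.
print(heavy_listener_chats_per_week / light_listener_chats_per_week)   # 35.0
```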
In principle, I would not make any distinction between Verified and Non-Verified listeners, as this would complicate things. A Verified listener could have "played by the rules" during the mock chat while behaving very differently in regular chats.
But if the amount of work is extremely overwhelming, Verified Listeners could be excluded in the first round.
@WelcomeToChat
Good thoughts! Tagging @Quietmagic and @midnightraven999 so they see!
@WelcomeToChat these are some really helpful suggestions! we will definitely keep these in mind as we move forward!
@WelcomeToChat
Thanks for the idea! While we don't have much control over who accepts GRs, for the PRs we can pick who we contact, so maybe prioritizing there could help make our evaluations as impactful as possible.
What you're saying makes sense: if a listener is chatting with more members per day, they'd have a bigger impact on overall listener quality, for better or worse.
I'm just not sure if we have the ability to prioritize listeners in this way, since the AE team would probably only be using the "Browse Listeners" search and our own list of previously evaluated listeners. "Chats" is available for sorting, but it might not be helpful since it's total chats rather than chats per day, and listeners with more total chats would probably have fewer quality issues.
@QuietMagic
Thank you, Josh, for taking the trouble to consider my humble suggestion in such depth.
Some time ago I received an email newsletter from Glen that said there are more than 200,000 listeners!!! So when I saw this quality initiative, which involves a human team of volunteers making evaluations, I thought there should be some way to prioritize, so that the evaluations conducted by this team have maximum impact on quality across such a huge population, many of whom might participate little, and some a lot.
Maybe the variable to prioritize could be Activity_per_Month, measured as the month-to-month increment in activity, like:
Chats_per_month = (total chats this month) - (total chats past month)
or
Members_helped_per_month = (members_helped this month) - (members_helped past month)
There could be filters like VL, stars, and good reviews, so a priority target could be listeners with huge activity and no written reviews and/or low stars and/or no VL. This should be relatively easy with access to the server, databases, and the software that runs the site. A simple piece of software implemented at this level would be a powerful tool for AE, or any quality control initiative; something like the sketch below, for instance.
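A prioritization pass could look something like this. Every field name (total_chats_this_month, star_rating, review_count, is_verified) and every weight here is invented for illustration; I don't know the real database schema.

```python
# Hypothetical prioritization pass for AE evaluations.
# All field names and weights are invented for illustration.

def activity_per_month(listener):
    """Month-to-month increment in cumulative chat count."""
    return listener["total_chats_this_month"] - listener["total_chats_last_month"]

def priority_score(listener):
    """Rank by recent activity, boosting listeners with weaker
    public quality signals (low stars, no reviews, no VL badge)."""
    score = activity_per_month(listener)
    if listener["star_rating"] < 3.0:
        score *= 2.0      # busy but low-rated: evaluate first
    if listener["review_count"] == 0:
        score *= 1.5      # no written reviews to judge by
    if not listener["is_verified"]:
        score *= 1.2      # slight boost for non-VL listeners
    return score

listeners = [
    {"name": "A", "total_chats_this_month": 220, "total_chats_last_month": 80,
     "star_rating": 2.5, "review_count": 0, "is_verified": False},
    {"name": "B", "total_chats_this_month": 510, "total_chats_last_month": 500,
     "star_rating": 4.8, "review_count": 40, "is_verified": True},
]

# Evaluate the highest-impact, lowest-signal listeners first.
for listener in sorted(listeners, key=priority_score, reverse=True):
    print(listener["name"], round(priority_score(listener), 1))
```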
The server knows all listeners that have logged in, even those who keep "offline" status.
So every prioritized logged-in listener can be contacted for evaluation, even those with "offline" status.
Prioritized listeners that keep their status "offline" but take General Requests might (I suppose) be a large proportion of the total, and listeners that use the platform for, say, flirting, small talk, or preying on certain categories of members can pick their victims "better" this way than by staying online, passively waiting for members to show up.
Thank you for including me in the discussion of such an important initiative!
I remain at your disposal,
Marcelo
I am very very excited about this program. As Donald said, and many have echoed, "Much appreciated and much-awaited." <3
@EvelyneRose I'm also wondering if the team could start with listeners with reports against them, to nip that in the bud, or will the team work up to that? I know it will be anonymous so I understand if you can't say. Will there be penalties for certain listeners who are graded very low by your team during multiple evaluations?
Thanks again for putting this all together <3
@LoyalLiz
I'm sure it's a good idea, but I'm afraid I am not exactly sure what you mean. If you don't mind, can you rephrase for me?
@EvelyneRose Sure. The team is eventually going to start doing anonymous evaluations of listeners. Will the team prioritize evaluating some listeners (such as listeners with poor ratings, reports against them, etc.) over others in the beginning, or will the team evaluate a mix of random listeners?
@LoyalLiz Thanks for the idea! My understanding is that we're only using publicly available info, so if we did this it would probably be star rating rather than reports. Maybe one way to do it would be if an evaluator is on the "Browse Listeners" page and there's a choice between evaluating either a 5-star listener or a 2-star one, we could pick the 2-star one. We'll add this to our list of things to consider!