Anonymous Evaluations Team - Update
Updated January 13, 2022: We have eliminated the Feedback Team. The AE Team still exists. I have updated the post to reflect this change.
Updated July 26, 2021: If you are already a mentor and want to join the AE Team, please apply here! You can also apply for the Anonymous Evaluations Feedback (AEF) Team here! Team leads have been updated on all applications. You can only be on one team at a time, not both, so pick whichever works best for you.
Please note that both applications require you to be a mentor. If you are not a mentor, please apply to be a Quality Mentor here and indicate that you would like to join the AE Team; the AEF Team requires that you have been a mentor (any track) for 2+ months.
___
(Original Post)
A few months ago, we announced (here) that we wanted to bring back the Anonymous Evaluations (AE) Team. We received a lot of feedback on that post, all of which has been incredibly valuable in guiding our planning and testing. Thank you!!
Here are just a few examples of questions that came out of that feedback, which we've been working very hard to address:
How do we set up anonymous evaluations in a way that complements other quality projects without being redundant?
Which listeners should we contact for anonymous evaluations?
What criteria are we using to evaluate listeners?
How do we ensure that we are evaluating listeners in a fair, consistent, and relatively objective way?
Who will provide feedback and how will they provide it?
How do we give feedback in a way that will help listeners improve while remaining positive and compassionate?
At this point, we feel ready to officially announce that this program has been green-lit!
Now that the process has been determined, we will be putting together criteria and an application shortly. See my end note for more details.
This is what we are planning to implement:
All listeners will be eligible for anonymous evaluations. This felt like the fairest approach to take. Other quality initiatives are in the works that we can use to focus on specific groups like new listeners. We will be doing evaluations through both Personal Requests and General Requests. Each listener will be evaluated only once.
There will be two separate teams, one for doing the actual evaluations and another for providing feedback to listeners. For now, both teams will be recruited from mentors. There will be quotas to ensure that this program reaches enough listeners to have a meaningful impact on overall listener quality.
Each evaluator will be given a separate member account that they will use only for anonymous evaluations.
For each chat, the evaluator will pick a topic that is not too intense/triggering. The topic should also be something familiar enough to the evaluator that the chat can feel realistic. For PR chats, if the listener has any topics listed in their profile, we'd pick from one of those topics. For GR chats, the evaluator would pick the topic when they submit the request.
We'd expect most chats to last about 20 minutes, but they could run longer or shorter depending on how long the evaluator needs to gauge the listener's overall quality.
After each chat, the evaluator will fill out a form that lists possible issues that could have occurred. For each issue, the evaluator will mark whether it occurred and, if so, how severe it was (either minor or major). Evaluators will be trained beforehand on what each issue means and how to determine the severity level. The form will also include comment boxes where evaluators will provide details/examples for any issues that occurred and describe anything else the listener did well or poorly.
Here is the current list of issues we're planning to evaluate:
Sexual/inappropriate chat
Bullying/harassment
Underage listener
Requesting/sharing contact info
Ghosting/incorrect blocking
Empathy
Professionalism
Advice-giving
Response time
Talking about own problems
Imposing beliefs/views
For each listener who's been evaluated, someone from the feedback team will review the form responses and contact that listener by PM to let them know they've been evaluated. We'll use compliment sandwiches, provide resources/links, frame feedback as an opportunity for further growth/training rather than a form of punishment, etc.
If a listener did well, the evaluator can leave a positive review. Evaluators won't ever leave mixed or negative reviews. The introductory active listening training will be updated to notify listeners that this program is happening and that they may receive an AE chat at some point.
The team leaders for this team will be:
@EvelyneRose - Admin oversight of the whole project
@MidnightRaven999 - AE Team Ambassador Lead Evaluator
@InternalAcceptance - AE Team Ambassador
@FrenchMarbles - Mentor Leader
What stage are we at now?
This is just an announcement post. We need to figure out team logistics before adding anyone to the team, but all team members will need to be a Mentor (adult side) or Teen Star (teen side) to join. If you are already a Mentor/Teen Star, any track qualifies you to apply. If you are not yet a Mentor or Teen Star, you will need to fill out a Quality track application. Details below.
If you are already a mentor/teen star: We will post an application in the next few weeks. As with all teams, applying does not automatically get you on the team; your application must be accepted first.
If you are not yet a mentor/teen star on any track, but would like to join this project: you must fill out a Quality Mentor/Teen Quality Star (QM/TQS) app here. Applying does not automatically get you onto this team; your Quality Mentor application must first go through processing and be accepted. If accepted, we will send you on to the team leaders to be trained.
Spots will be filled according to need. We will have the application ready within the next few weeks.
Please reach out to any of us if you have any questions! And as always, constructive feedback, questions, concerns, comments, etc. can go below :) I'm going to let @Midnightraven999 answer most of the questions as well so that you can get to know them :)
[updated by Evelynerose on July 13, 2022]
@EvelyneRose
Ooh, I honestly love the idea of anonymous evaluations a whole ton, and I am so glad to see it being rebooted (: Also, thank you for all your hard work, Evelyne!! So, anyway, I have been putting some thought into this, and I actually have a few questions about the whole team as well!
All listeners will be eligible for anonymous evaluations. This felt like the fairest approach to take. Other quality initiatives are in the works that we can use to focus on specific groups like new listeners.
We will be doing evaluations through both Personal Requests and General Requests. Each listener will be evaluated only once.
The first part I wanted to address is which listeners should be contacted. I love the idea that the anonymous evaluation team will focus on all listeners instead of just a specific group. However, since the evaluations only happen through Personal Requests and General Requests, and the listeners who set themselves online or accept general requests tend to be newer listeners (though there are definitely a lot of more experienced listeners as well!), and many listeners have booking forms now, I was wondering if the AE team might expand to booking forms? I for one would love to see how I do if someone were to anonymously evaluate me, but unfortunately, I do not often put myself as online, and I mostly do scheduled or offline messaging. The idea is that it would be wonderful to evaluate listeners who have been here a bit longer and are not often online status-wise but do active listening through scheduling. This could definitely come later, since everything might need to settle down a bit, and general and personal requests should definitely be the focus (since most members are supported that way) (:
Also, I think this was mentioned several times, but I would just like to say that giving listeners 2 chances at the evaluation may be good for the following reasons:
1. As Donald mentioned above, sometimes listeners aren't at their best all the time, and it would be unfortunate to only get one chance. At the same time, I firmly believe that we should not listen when we aren't at our best, because we really want to support every member as well as we can with a certain level of quality, and if we are not feeling well in the first place, providing poor-quality support is really a no-go. That said, we all have our fair share of chats that went especially well (maybe because we understood the topic more, etc.), and having a 2nd chance is a great way to account for that!
2. One of the questions the post said the AE team wanted to address was "How do we ensure that we are evaluating listeners in a fair, consistent, and relatively objective way?" Having a second chance with another evaluator is a great way to make it more objective. I am sure every listener and evaluator has their own biases no matter how hard we try not to; it really is just human nature. One way to address this is to have a different evaluator for the second chat! (It might be a bit like the VL project, where a person who does not pass the mock chat the first time gets another attempt.)
3. Main issue: It really comes down to whether we'd rather have quantity or quality with the 2 chances. I am assuming that since the evaluators are mentors/teen stars, they will have a lot of listeners to evaluate. If we do add a 2nd chance, the question becomes whether we'd rather have 2nd chances (so listeners who do not pass the first time can learn and grow from their mistakes) or quantity (more listeners evaluated). I really don't have an opinion either way, because it's a trade-off. Perhaps it might be better to add 2nd evaluations a bit after everything settles down?
I would really like to end this comment by saying that I am, once again, so glad that anonymous evaluations will go into effect, and I wish the whole team the best of luck!!
@DayDreamWithYou Thanks for the feedback!
I'd imagine it would be pretty doable to add booking forms to our evaluations at some point since it would basically just be a scheduled PR chat. The only thing we might have to consider is how to go about finding listeners who have booking forms, since they wouldn't be in the GR queue and wouldn't be listed as online. Maybe a search in "Browse Listeners" for some common booking service keywords would do the trick.
And thanks for weighing in on the multiple evaluations question. That's a good point that if we do end up doing multiple evaluations, having a new evaluator do the 2nd one could help with objectivity and fairness. Agree that there are quantity/quality tradeoffs to think about, whichever way we end up going. (Do we want to reach as many listeners as possible at least once? Or reach fewer listeners but have the process be a bit more thorough?)
I am a listener. I was never informed about this. Maybe I was, but it is not something I remember. I hope that when a listener reaches a certain level, they are automatically given a notification. That would be better.
@Actuallynobody017 I agree. This scheme should be made transparent, as far as possible.
But has it actually started yet?
Before it starts, I would hope to see it mentioned in our Active Listening training and in the LISTENER PRIMER 2021.
Charlie
@RarelyCharlie According to a couple of sources, it has already started, with no update here in the update thread and apparently no announcement anywhere.
When a listener is evaluated, and there's no official announcement they can look up, how do they know they're not just being trolled?
Charlie