
Anonymous Evaluations Team - Update

EvelyneRose September 29th, 2020

Updated January 13, 2022: We have eliminated the Feedback Team. The AE Team still exists. I have updated the post to reflect this change.

Updated July 26, 2021: If you are already a mentor and want to join the AE Team, please apply here! You can also apply for the Anonymous Evaluations Feedback (AEF) Team here! Team leads have been updated on all applications. You can only be on one team at a time, not both, so pick whichever one works best for you.

Please note that both applications require you to be a mentor. If you are not a mentor, please apply to be a Quality Mentor here and indicate that you would like to join the AE Team; the AEF Team requires being a mentor (any track) for 2+ months.

___

(Original Post)

A few months ago, we announced (here) that we wanted to bring back the Anonymous Evaluations (AE) Team. We received a lot of feedback on that post, all of which has been incredibly valuable for helping to guide our planning and testing. Thank you!!

Here are just a few examples of questions that came out of the feedback that we've been working very hard to figure out how to address:

How do we set up anonymous evaluations in a way that complements other quality projects without being redundant?

Which listeners should we contact for anonymous evaluations?

What criteria are we using to evaluate listeners?

How do we ensure that we are evaluating listeners in a fair, consistent, and relatively objective way?

Who will provide feedback and how will they provide it?

How do we give feedback in a way that will help listeners improve while remaining positive and compassionate?


At this point, we feel like we are ready to proceed with officially announcing that this program has been green-lit!

We will be forming criteria and an application shortly now that the process has been determined. See my end note for more details.


This is what we are planning to implement:

All listeners will be eligible for anonymous evaluations. This felt like the fairest approach to take. Other quality initiatives are in the works that we can use to focus on specific groups like new listeners. We will be doing evaluations through both Personal Requests and General Requests. Each listener will be evaluated only once.

There will be two separate teams, one for doing the actual evaluations and another for providing feedback to listeners. For now, both teams will be recruited from mentors. There will be quotas to ensure that this program reaches enough listeners to have a meaningful impact on overall listener quality.

Each evaluator will be given a separate member account that they will use only for anonymous evaluations.

For each chat, the evaluator will pick a topic that is not too intense/triggering. The topic should also be familiar enough to the evaluator that the chat can feel realistic. For PR chats, if the listener has any topics listed in their profile, we'd pick from one of those topics. For GR chats, the evaluator would pick the topic when they submit the request.

We'd expect most chats to last about 20 minutes, but they could be longer or shorter depending on how long the evaluator needs to gauge the listener's overall quality.

After each chat, the evaluator will fill out a form that has a list of possible issues that could have occurred. For each issue, the evaluator will mark whether it occurred and, if so, how severe it was (either minor or major). Evaluators will be trained beforehand on what each issue means and how to determine the severity level. The form will also include comment boxes where evaluators will provide details/examples for any issues that occurred and describe anything else that the listener did well or poorly.

Here is the current list of issues we're planning to evaluate:

Sexual/inappropriate chat

Bullying/harassment

Underage listener

Requesting/sharing contact info

Ghosting/incorrect blocking

Empathy

Professionalism

Advice-giving

Response time

Talking about own problems

Imposing beliefs/views

For each listener that's been evaluated, someone from the feedback team will review the form responses and contact that listener by PM to let them know they've been evaluated. We'll use compliment sandwiches, provide resources/links, frame feedback as an opportunity for further growth/training rather than a form of punishment, etc.

If a listener did well, the evaluator can leave a positive review. Evaluators won't ever leave mixed or negative reviews. The introductory active listening training will be updated to notify listeners that this program is happening and that they may receive an AE chat at some point.


The team leaders for this team will be:

@EvelyneRose - Admin oversight of the whole project

@MidnightRaven999 - AE Team Ambassador, Lead Evaluator

@InternalAcceptance - AE Team Ambassador

@FrenchMarbles - Mentor Leader

What stage are we at now?

This is just an announcement post. We need to figure out team logistics first before adding anyone to the team, but all team members will need to be a Mentor (adult side) or Teen Star (teen side) to join. If you are already a Mentor/Teen Star, any track qualifies you to apply. If you are not yet a Mentor or Teen Star, you will need to fill out a Quality track application. Details below.

If you are already a mentor/teen star: We will have an application shortly that we will post in the next few weeks. As is the standard rule of all teams, just applying will not automatically get you on the team. Your app must be accepted first.

If you are not yet a mentor/teen star on any track, but would like to join this project: you must fill out a Quality Mentor/Teen Quality Star (QM/TQS) app here. You will not automatically be able to join this team just by applying. You will need to pass the Quality Mentor app first, which means your application must go through processing and be accepted. If accepted, we will send you on to the team leaders to be trained.

Spots will be filled according to need. We will have the application ready over the next few weeks.

Please reach out to any of us if you have any questions! And as always, constructive feedback, questions, concerns, comments, etc. can go below :) I'm going to let @Midnightraven999 answer most of the questions as well so that you can get to know them :)

[updated by Evelynerose on July 13, 2022]

blissart September 29th, 2020

@EvelyneRose

Much-needed initiative, glad it is finally rolling out.

1 reply
EvelyneRose OP September 29th, 2020

@blissart

Thanks Bliss!

Suryansh September 29th, 2020

@EvelyneRose @MidnightRaven999 @QuietMagic

This is definitely a great initiative. It will help a lot of listeners improve their active listening skills more easily, and that will help members get better support. It will have a great impact on the community, which is much appreciated. I'm so excited to be a part of this team; this is a great program for all of us. Thanks a lot for starting this.

Suryansh

Be the change that you want to see in the world.

1 reply
EvelyneRose OP September 29th, 2020

@SuryanshSingh

Thank you!

SummerBreeze00 September 29th, 2020

@EvelyneRose

A timely initiative. We want it to be a safe and supportive platform for the community.

1 reply
EvelyneRose OP September 29th, 2020

@SummerBreeze00

Yes absolutely that's the hope!

DonaldDraper September 29th, 2020

@EvelyneRose @MidnightRaven999 @QuietMagic

Here's my humble take on this initiative.

1) It's a great initiative to improve overall listener quality. Much appreciated and much awaited.

2) This should be the new VL project. No disrespect to the current VL project and team, but the current VL project has somewhat lost its value in listener circles. Listeners have been discussing the topic 'Verified listener doesn't necessarily mean a good listener' a lot. With a passing rate of about 95%, any listener can have the VL badge. New listeners get all tense and nervous before and during the VL mock, only to find that it wasn't useful after all. So, with all due respect, let this be the new VL project, or at least merge the two projects.

3) Listeners should have to apply to be evaluated. Rather than evaluating all listeners at random, only listeners who want a tag on their profile should be evaluated (like the VL badge).
Why?
Because listeners are not always at their 100%. They are known to take up chats when they are low themselves; even good and experienced listeners do that. It would be unfair to judge them when they are not at their best. So, if they feel they are at their best phase, they can apply and be alert for an anonymous evaluation chat.
Which brings me to the last point

4) Listeners should be given at least two attempts at evaluation, not just one.
Why? Listeners are volunteers who are trying their best to help and support people. When they make even a small mistake, they take it to heart. Imagine how they might feel when they fail at the only chance to get evaluated. It could be crushing and could have adverse effects on their mental health. We should remember that most listeners are members too. They have recently overcome, or are still trying to overcome, their own mental health struggles. It could even lead to them permanently deactivating their account.
This means that in the process of finding good listeners, we might end up losing good listeners.

Thank you. I hope the team, admins, and ambassadors will consider these points while drafting the final roll-out.

20 replies
Starmedia September 29th, 2020

@DonaldDraper

Those are really good and well-thought-out points, Donald! Keep it up!!

EvelyneRose OP September 29th, 2020

@DonaldDraper

Hi Donald! Thank you for your thoughts! Great ideas. The thought behind being evaluated only once was that we didn't want anyone to feel targeted. We thought it would be fair across the board if everyone only got it once. But perhaps if they opt in for a second one, that would be a consented way of doing it. We'll see what we can incorporate!


I have a few questions about this point. You said "So, if they feel they are at their best phase, they can apply and be alert for an anonymous evaluation chat."

1. Do you feel that would make them feel like they had to be on their best behavior at all times because they are expecting a chat? The idea is to see what they are actually like as a listener rather than if they know they'll be evaluated.

2. Just doing a little devil's advocate to consider all perspectives- if they did feel they had to be on their best behavior and alert for a chat, would that stress them out more just waiting for it? Or, do we think it'd be a good thing because they'd be on their best behavior with all chats? Basically just asking for your thoughts on this :)

1 reply
DonaldDraper September 30th, 2020

@EvelyneRose

Yes, the listeners apply and consent to be evaluated within a specific duration. I am assuming a duration of, let's say, one week, during which they put up their best.

1. Do you feel that would make them feel like they had to be on their best behavior at all times because they are expecting a chat?
Yes, it might.

It'd be more like a question in the back of the mind: 'Am I being evaluated?' That could lead to mental exhaustion over time as they take more and more chats, since they don't know when that chat is going to pop up.
But if they have applied for evaluation, they know that they have to be on their best behavior for one week during which they could be evaluated.

2. Just doing a little devil's advocate to consider all perspectives- if they did feel they had to be on their best behavior and alert for a chat, would that stress them out more just waiting for it? Or, do we think it'd be a good thing because they'd be on their best behavior with all chats? Basically just asking for your thoughts on this :)

This feels mostly like the first question. As I said, the waiting period for the evaluation would be limited. Besides, since they have applied to be evaluated, they are prepared to be alert for one week and give their best. If they don't feel ready for it, they won't apply.

TaranWanderer September 29th, 2020

@DonaldDraper I feel as though points 3 and 4 should only apply if this is how we evaluate someone for being verified -- which I think should be a separate thing from anonymous evaluations. Yes, maybe the VL process should be revamped, but we should also still have anonymous evaluations. The point is that "bad" listeners are not going to sign up to be evaluated, but still do need to be "caught"/noticed in order to increase overall listening quality (I don't like the harsh wording but my brain won't work with me today :P). The whole point of this project is overall quality, which members & listeners have repeatedly said is important, so I think this is a great step towards that, and it can open up other ways to improve quality, such as updating the VL process too in the future.

@EvelyneRose I also liked your comment about how listeners should always be trying their best :) I would agree. I can understand the point that it wouldn't be fair to judge someone for a badge/being verified without them knowing, but it also relates to overall quality and taking the listening role seriously. Listeners are allowed to have bad days, and on those days, they shouldn't listen :) When you put your listener hat on, you should be in the right mindset, emotionally ready, and committed to giving your time. If you're not feeling that, don't take on chats and take time for yourself instead. That's a pretty good way to look at listener quality... why wouldn't you always want to be trying your best for members?

8 replies
kindSoul10 September 29th, 2020

@DonaldDraper I doubt this can replace the VL project. Since anonymous evaluation has a random approach, it will miss listeners who genuinely want to receive feedback.

Anonymous evaluations can also be intimidating to some people. It's already stressful for beginners to receive coaching emails that have been triggered by (genuine) anonymous members filling in critiques.

2 replies
TortueDesBois September 29th, 2020

@DonaldDraper
We need a way to report listeners who take and abandon chats; of course, in VL chats we can't see that. A lot of listeners take chats and abandon them very quickly, post zero or one word and leave, or even block right away. Then there are those who are being inappropriate, trying to flirt or date, or offering offsite contact. This project is there to identify listeners who should not be here, since they often block before the member can report them.

Additionally, listeners should always be in their best mood when they take chats, not only when they know they are being evaluated. If they are not, but still want to be helpful, they could support in the forums, where it's less demanding (we have a lot of needs-reply posts), work on other projects (writing new discussions, leading sharing circles, which is less demanding than active listening, etc.), or use a member account.

We also want to offer early coaching (it would not be a reporting thing for those who are just struggling with active listening skills and need additional coaching; only listeners being very inappropriate would be in "trouble", gaining behavior points as a result).

1 reply
QuietMagic October 2nd, 2020

@DonaldDraper

Thanks very much for the feedback!

I definitely like the points you're making that listeners are people with feelings, many listeners are also members, and getting evaluated (or the anticipation of a future evaluation) can be very stressful. Good listeners can still have "off" days/chats sometimes, even if they're practicing self-care and only listening when they feel up for it.

For those reasons, we're definitely trying to set up the process to be forgiving. I know one goal of Evelyne's first thread from a few months ago was to get ideas on how to balance a couple priorities:

1) We want to help listeners improve so that members receive the high-quality support they deserve

2) But we also want to be gentle and compassionate when giving feedback so that listeners don't feel scared, judged, discouraged, or like they're being treated unfairly.

We'll almost certainly encounter some listeners in AE chats who clearly have bad intentions and don't belong on the site. But, we'll probably come across a lot more "bad" listeners who want to be helpful, are trying their best, but maybe just don't have the skills or understanding yet to be able to provide effective support. If someone's in that second group, our goal is to help train them so that members can benefit. And of course if someone has a great AE chat or does a good job at something, we want to let them know too so they'll feel confident/encouraged and keep doing what they're doing.

***

We can give some more thought to possibly having listeners apply for AE. I think our plan up to this point has been to not leave it up to the listeners, for a couple reasons:

1) If listeners are able to apply or opt out, then listeners who are really bad or knowingly doing inappropriate things would probably just never apply, so AE wouldn't have as much of an impact on overall listener quality.

2) If a listener knows that they're going to be evaluated, they might act differently. What makes AE a bit unique/different from VL is that we get to see how listeners act when they aren't "on their best behavior".

Doing multiple evaluations is an interesting idea! I personally like the thought of being able to re-evaluate listeners to see whether AE is actually helping them improve.

2 replies
brilliantTurtle89 September 29th, 2020

@EvelyneRose

Great idea. I'd love to get involved.

1 reply
QuietMagic October 2nd, 2020

@brilliantTurtle89

Awesome, thank you! The feedback team application is now available for mentors and the evaluations team application should be available sometime within the next couple weeks.

calmLight1263 September 29th, 2020

@EvelyneRose

This is really a good initiative. Much appreciated.

However, I would like to ask: what if those filling out the evaluation forms after the chat do so on the basis of notions that could be flawed (no offence)? Though they will be trained on the whole process, we human beings often hold biases. So that makes me question how the authenticity of the evaluators' remarks will be ensured. (Just a question out of curiosity.)

5 replies
Laura86539 September 29th, 2020

@calmLight1263

I thought the exact same thing. Realistically, even if they were anonymous, there are some moderators I don't get on with, and I feel this may taint the results. It's just my opinion though! I wonder if there is a way for them to combat this?

1 reply
calmLight1263 September 29th, 2020

@Laura86539

Thanks for finding it relatable. :)

QuietMagic October 2nd, 2020

@calmLight1263 @Laura86539

Thank you for the feedback! I'm not sure if this completely addresses/resolves the concerns, but here are a few things I can think of that we're doing to try to mitigate bias:

  • As Evelyne mentioned, the evaluation form won't just be "rate this listener on a scale from 1 to 5". We've identified about a dozen types of issues that could occur, and evaluators will be trained on how those issues are defined.
  • The evaluation form will include text fields where evaluators are expected to give details/examples about what happened in a chat. This is so we're able to give the feedback team enough information to work from when they reach out to listeners.
  • We're currently only recruiting for AE from mentor track listeners. This doesn't necessarily make everyone objective (lol), but at the very least it does mean we're picking from listeners who have a certain level of trust/experience/skill with active listening.
  • The evaluation form will ask for the evaluator's listener name. So if heaven forbid we had something like one evaluator consistently leaving fake negative evaluations, we'd be able to identify that.


Maybe we could ask/train evaluators to "recuse" themselves if they happen to come across a listener where they know there's some personal history that might creep in and bias them in either a positive or negative direction. Thoughts? Other suggestions?

2 replies
EvelyneRose OP October 3rd, 2020

@QuietMagic

I was thinking maybe they can request one with an admin (aka me or one of you) as a redo.

1 reply
QuietMagic October 3rd, 2020

@EvelyneRose I was thinking more along the lines of how to minimize potential bias for all chats, but what you're saying makes sense: if a specific listener complained about potential bias, they could possibly request a redo with one of the team leaders.

bouncyVoice4149 September 29th, 2020

@EvelyneRose

Hi! Love this initiative :) I just have a question: will all listeners receive a chat at some point, or is there a form we need to fill out if we want to be evaluated?

5 replies
bouncyVoice4149 September 29th, 2020

If someone from the team sends us a PR and we aren't online, how will that affect being evaluated?

4 replies
TaranWanderer September 29th, 2020

@bouncyVoice4149 I'm assuming they would only go for people who have their status as online :) and let's say you miss the request, I don't think that would count as an evaluation at all. Although that's just my guess :) I don't think the AEs are meant to trick anyone or not give people a fair chance

3 replies
bouncyVoice4149 September 29th, 2020

@TaranWanderer thank you :)

1 reply
MidnightRaven999 September 30th, 2020

@bouncyVoice4149 hey there, you can just appear online and we will try to evaluate you (we have noted that you would like to be evaluated!)

QuietMagic October 2nd, 2020

@TaranWanderer @bouncyVoice4149 Yup, that's correct that we aren't planning to send PRs to listeners who aren't online. And yes, agree that the goal of this isn't to try to trick listeners into "failing". :)

HopieRemi September 29th, 2020

@EvelyneRose

Congrats to those that got onto this team! Very happy for you all. This is a great program. I am sure it will help quality.

2 replies
EvelyneRose OP September 29th, 2020

@HopieRemi

Thank you!

1 reply
HopieRemi September 30th, 2020

@EvelyneRose

you are welcome!

Starmedia September 29th, 2020

@EvelyneRose @MidnightRaven999 @QuietMagic

Donald has already stated some really good points!!

Considering the points Donald made, I would like to add a few ideas for AE:

1) There can be three teams within the Anonymous Evaluations Team:

Team 1:

1. Evaluates listeners randomly (whether verified or not). This team can be considered an independent team (not related to Team 2 and Team 3) coming under the Safety domain.

2. Evaluations can be done based on the minimum expectations of a listener. This team can go through some basic training to evaluate based on the basic 7 Cups guidelines.

3. Listeners who don't perform well can get trained by a Listener Coach and be re-evaluated later. Listeners who have successfully completed AE can note it on their profile with some code to avoid confusion.

Team 2 and Team 3 can be part of the Verified Listener Project (maybe coming under the Quality domain?).

Team 2:

1. Focuses on evaluating the listeners who have applied for the VL badge.

2. This team can go through some extra training along with the basic training to identify a "satisfactory listener". A satisfactory listener can be identified on the basis of certain criteria (empathy, commitment, positivity, reflection, helpfulness, etc.) along with the minimum requirements.

3. All the listeners who don't meet the satisfactory listener criteria but are interested in the VL badge can be asked to go through VL coaching (with compulsory mock tests). Then, everyone who made progress with the coaching (meeting the requirements) can go through the actual VL mock test with a Verifier (and maybe note it on their profile with the AE date and a unique code, so that AE doesn't need to evaluate them again).

Team 3:

1. Focuses on evaluating listeners who are already verified (including leaders who are taking member and peer/chat support chats).

2. This team can go through the same basic + extra training as Team 2.

3. Already-verified listeners who don't meet the "satisfactory listener" criteria can have their badge put on hold and go through the VL coaching. Everyone who has benefited from the coaching and made progress can have their VL badge back (and maybe note it on their profile with the badge-regain date, or AE date + unique code, so that AE doesn't need to evaluate them again).

Note: Listener Coaching can be considered for the basic minimum requirements, while Verified Listener Coaching can be more focused and serious for a quality listening experience.

2) Listeners who don't perform well in AE deserve multiple attempts at AE.

For the ones who don't perform well in AE:

Everyone has the potential to learn and grow. Just one evaluation should not determine a person's ability as a support listener. So, if the listener genuinely feels that they performed well, they can be evaluated by someone else from the AE team. And if the listener admits that they didn't perform well (or if they really didn't perform well, with proof/explanation), then after getting coached they deserve multiple attempts at AE as well.

Thank you so much for bringing AE back! I really appreciate it.

I hope the team considers these ideas before implementing the AE project/program.

9 replies
Starmedia September 29th, 2020

@Starmedia

Sorry for the typos!!

TaranWanderer September 29th, 2020

@Starmedia I like your points and the possibility of having a few different teams/projects (especially checking in on listeners who already have the VL badge). They certainly seem like big projects each on their own, which would probably take time to develop, so I think at least the AE ones on their own are a good starting point :) I wonder, what would be the real value of doing a second AE on someone? In my perspective, the AEs are sort of ways to find 1) listeners who are just 100% not genuine or 2) listeners who just need some improvement/reminders. If someone falls under 2, there's nothing wrong with that. We can all grow and learn, and I don't think an AE that results in a "coaching moment" is a bad thing or will forever label that listener in a negative way. Instead, it seems like a way for the listener to learn more, rethink their approach, or whatever the improvement is, and move forward as a better listener on their own. In my opinion, you don't have to prove yourself to anyone but the members you give your time to, and they deserve the best you can give :) A second chance to me just seems like trying to show the AE team you've improved, or you can do better, which is great, but ultimately feels unnecessary (especially since there are just so many listeners, and I'm feeling like the team wouldn't be that large, second evals may take away from reaching a wider audience). Also, I'm sure the team would have some way of tracking who had been evaluated, so the code in bios doesn't seem necessary (and this way, no one feels like they've been labelled or more importantly, that they've missed out on showing they can do well)

3 replies
Starmedia September 29th, 2020

@TaranWanderer

Hey Taran, thank you so much for your thoughtful reply. I really appreciate your effort.

I would like to clarify a few things:

1) I totally agree that these three teams can be considered big projects; starting with the regular AE is fine, which is similar to Team 1 - randomly evaluating everyone with a focus on the minimum 7 Cups criteria. (Meanwhile, we can have something similar to Team 2 and Team 3 in progress and give them some time to develop based on the resources.)

2) I still feel multiple evaluations are necessary, with a gap of some months, mainly to keep a check on how much progress the listener has made (in case they didn't benefit much from the coaching for different reasons, e.g. the coach was not helpful enough, or the listener themselves is not taking their role seriously).

3) Of course, if they have a better way of tracking, then that's great! And it need not be a code (the code was just an example); even a badge or something else would do.

2 replies
TaranWanderer September 29th, 2020

@Starmedia that makes sense, a second evaluation would be good if the listener didn't benefit or had gone from being "good" to not taking their role seriously later on (I've seen it happen...). Maybe a listener could be put back into circulation for an AE after a certain amount of time has passed... which might depend on the team's capabilities :)

1 reply
Starmedia September 29th, 2020

@TaranWanderer

Yes, agreed! I hope you're safe and well, Taran! Take care of yourself and have a good day/night.

Illusionaryworld September 29th, 2020

@Starmedia

Great answer Starmedia, loved the ideas!

1 reply
Starmedia September 29th, 2020

@Illusionaryworld

Glad you liked it, thank you!!

QuietMagic October 2nd, 2020

@Starmedia Thank you for the suggestions! We'll add those ideas to our list for consideration (if not in the short-term then maybe as future possibilities once we have the basics down pretty well):

1) Multiple AE teams with different scopes/target groups of listeners and possible interconnectedness with VL
2) Multiple AE attempts per listener
3) Possible re-evaluation after X months.

I was thinking a bit about the three teams you mentioned and came up with a mini-model of its dimensions. I've highlighted what I think our current AE plan would be:

1) Target group: Team 1 = all listeners, Team 2 = VL applicants, Team 3 = VL
2) Evaluation criteria: Team 1 = minimum expectations/safety, Teams 2-3 = satisfactory listener

The evaluation criteria are a little fuzzy to me. :) If we go with the current version of AE, we'd evaluate everyone based on similar standards, but the feedback might vary depending on what sort of listener we're interacting with. For example, some listeners might need feedback on major issues ("minimum expectations") while other listeners might just need minor tweaks ("satisfactory listener").

1 reply
Starmedia October 4th, 2020

@QuietMagic

Thanks for responding, Magic! :)

And, you're very welcome. That's a very good summary of my suggestions. I agree with everything you said!

FineFrog15 September 29th, 2020

@EvelyneRose

I enjoy reading all of the posts. I guess I wouldn't have a problem with any of it - people who have nothing to hide, hide nothing. Having the anonymous aspect could be helpful in that the listener doesn't get all worked up and nervous from knowing they are being graded. I would think this would allow the chat to unfold organically. But there's also the thought of, oh yes, we are coming for you, so you'd better tighten up in every chat because big brother is watching, haha. The chat topics sound fair based on level. I agree with the earlier comment about personal bias: if the grader has a past history with the listener, this could skew the outcome if a personal decision were made to make things more difficult for the listener, change the way things are phrased, and then wrongly report. A pre-set form for how listeners should be graded sounds good, just as would be done in a proper job interview to help with fairness, so that part of it does sound helpful if everyone can ethically stick to it.

If this is done in a fair manner, it sounds very easy to me. Even in basic training, aren't listeners asked to ask open-ended questions, ones that require an answer beyond yes or no? To reflect back to the member what they said, but in their own words, so that the member knows they were understood? To validate the thoughts and feelings of the member? This particular grading system does not sound difficult to me, as those concepts are not even included, and I think it's for good reason. Not every chat will allow for that each and every time. It's situationally dependent.

This sounds vastly different from the VL program, where one is asked to perform and prove that they know these concepts. This program sounds much different and involves not only quality but safety. It seems obvious to me that safety would fall under the direct line and interest of quality initiatives.

One point of interest that I noted was "talking about own problems". This could be vague for a grader at times, based on whether they were chatting with a newer listener or a highly seasoned listener. Even graders might not know what they don't know, and might not have had the opportunity to be trained in a certain way or to have read more of the literature that 7 Cups provides. 7 Cups tells listeners that they are allowed to contribute with their own past issues, but only if it has meaning that is relevant to the member and only if the listener is able to contribute in a way that tells a "hero" story - one that brings hope to others and sameness, shows the member that the listener truly understands, doesn't dwell on it overly long, and quickly reverts back to keeping the focus on the member. *shrugs shoulders* This would fall under the category of talking about one's own problems.

A second point of interest could be related to language barriers. Even if a grader thinks they have full command of the English language but speaks English as a second language (ESL), this could create problems for a listener being graded who has English as their first language. There are certain terms, ways of speaking, cultural differences, etc., and this is one of the reasons that 7 Cups lists the language for members in the GR queue. This holds true for any language and cultural differences; graders should only interact using their first language, and only with a person they are grading who uses that language as their first language.

Hoping this was somehow helpful.

4 replies
TaranWanderer September 29th, 2020

@FineFrog15 I agree, I think it's a good initiative that obviously has small details to figure out. I don't think there should be any "fear" of being evaluated, since it's just for your own improvement. Seems nicer to have a conversation about improvement than have members giving negative reviews on your account :P

About bias in evaluators... my guess would be that if an evaluator knows they have a bad past with/opinion of a listener, they shouldn't be the one evaluating them. Seems like a simple thing to work around, kind of like jury duty. And I think the evaluations don't sound like they're set-in-stone "what I write is law", but more of a record of how the evaluator felt the chat went (which will always be the case, even in real chats; it's all subjective), and then a completely separate person reads through it and talks with the listener about how the chat went, wherein they may clarify anything, or just take it as someone having misunderstood you (which happens in real chats too!).

I also don't think there's any worry about the "talking about self" part; graders should easily know the difference between sharing your success story for the benefit of the chat, and making the chat about you and putting the member into the listener role (I've had this happen as a member... it's one thing to say "I've gone through something similar, I believe in you" and a completely different one to say "I'm really struggling with x and don't know what to do :("). Everything has context :)

Language barriers might pose an issue, but in general, this can happen in regular chats too, and that's okay. As long as the listener is generally following the guidelines, even language barriers/simple misunderstandings shouldn't be an issue. I don't think these AEs are going to be super harsh; they're more about looking for big issues.

1 reply
FineFrog15 September 29th, 2020

@TaranWanderer

Hopefully you are correct.

QuietMagic October 3rd, 2020

@FineFrog15 Thanks for the feedback!

You're right that the exact wording "talking about own problems" is a little bit unclear. It was just a little shorter than saying "monopolizing the chat with their own problems". What @TaranWanderer has said below about this is the way we're seeing it. Empathic self-disclosure that doesn't distract from the member's issues is fine and can be really helpful. Unloading problems onto a member so that the roles are reversed and the member is begrudgingly caring for the listener, instead of talking about what they want to discuss, is what we're looking to evaluate.

I can understand what you're saying that maybe an evaluator who isn't familiar with a language might struggle with a chat and then attribute it to the listener. Maybe we could ask evaluators to only chat in languages where they are fluent?

1 reply
FineFrog15 October 3rd, 2020

@QuietMagic

I think you have your finger on the pulse! Absolutely! I wouldn't want a listener to monopolize my time in that way, unload their issues on me, make me feel even worse than I already did if I was seeking support, and reverse roles on me to the point that I had to support them. It seems like many things are interconnected around here, even the concept of listing "lived experiences" on one's profile page. If a member feels a likeness or familiarity with the chat topics, this could be a go-to listener for them, but not if the listener chooses to "unload" on them and make the member responsible for them. The "hero story" would be key for the listener: get in and get out quickly, don't dwell, but offer hope and continue to focus on the member. I'm glad you understand the concept of what 7 Cups says is okay; doing it the right way is a huge bonus for the member. Teaching graders this concept would be key in your process for quality assurance.

I'm glad you like my idea related to language and cultural barriers; this could happen. Some people understand diversity, seek to understand those concepts, and learn in an effort to help everyone around here, and some people don't, so it truly is the fault of no one. It's not a requirement for listeners to seek understanding of all cultures, phrases, ideas, and concepts, so in my mind, ensuring a grader is the right fit for the one being graded is just as important as the listener being the right fit for a member, so as to lessen false negatives.
