Just-In-Time Training (JITT) / Noni Help
Just-In-Time Training (JITT): Limited and Careful AI to Empower Human Connection and Support
On 7 Cups, our goal is to provide the tools and the space for people to support each other through their suffering. We help people through active listening, group support, therapy, growth paths, mindfulness, forum posts, and so much more. Over the last 10 years, we have learned so much and made so much progress toward this mission. Currently, we provide 11 interventions to help people better cope with and overcome mental health challenges. The power of scaled-up support with the help of technology, on a global scale, has been clearly demonstrated by our compassionate community, which continues to evolve.
We want to decrease human suffering for free or at little cost. We believe in mental health for all. Anyone that needs help should be able to receive it regardless of language, location, or financial status.
Active listening is the core foundation of our mental health for all vision. It is a set of principles that turns communication into honest and empathic conversation paving the way for support in our daily lives. This leads to human connection on a new level, one that supports healing and wellbeing.
When we provide this support, however, it is sometimes easy to get stuck. For example, a member says something that is difficult to respond to. You want to respond in a caring way, and the member knows you care for them and want to help, but you are just not sure what to say. When this happens, we can freeze, or maybe we say something that isn’t super helpful, which can take the conversation in a direction that is less healing.
Another important thing to know about the skill of listening is that it is normally distributed. There are some people that are excellent listeners, most people are average listeners, and then some people that struggle to be good listeners. The vast majority of listeners want to be good listeners. If a person signs up on this platform and goes through the work of training, as a volunteer, they are clearly here to be the best listener they can be. But given the distribution, having varied tools can only help.
With the rise of AI over the last decade, we began to think more about the role of this technology in facilitating human connection, healing, and growth. Noni, our AI chatbot launched in 2014, was trained and informed by our community and maintained our values. By 2017, Noni was able to greet users, participate in a group discussion while uniting group members, as well as contribute with positive humor. And, more recently, Noni just went through another series of iterations.
With the latest wave of AI models, we began probing a question that could reshape our current support system: what if the latest technology could help everyone wanting to train to be a better listener? We know that anyone who volunteers to listen and trains with us is already very dedicated. This motivation is usually followed by a long pathway of training at the 7 Cups Academy, but what support could we give these active listeners to start their journey right away? Listening is a skill that we can improve continuously, and learning something new every day provides the ground for ongoing practice.
There is a sort of training called “Just-In-Time Training” (JITT). This is training that provides the person with the exact information they need at exactly the right moment. Contrast this with taking a course and then remembering and applying what you learned in that course 2 or 3 months later. Both types of training are needed to help people grow professionally.
In the near future, new listeners that want help getting unstuck with just-in-time training will have the option of clicking a “Noni help” button that says “suggest a response.” AI will then draft a response to the member’s message and drop it in the listener’s message composition area. The listener can then edit the response and hit send when ready.
AI can help us do a better job of decreasing suffering for people that are in pain by empowering human connections based on the core principles of active listening for support. And better yet, this application of AI can be an ethical demonstration of a key debate running through everyone’s mind: Can Humans and AI work together? Collaboration helps everyone move forward together, and with the 7 Cups model in mind, we aim to set an ethical example on how to use AI to productively scale up emotional support.
What Led Us To Need Just-In-Time Training?
Up to this point, our focus has been on foundational training methodologies and tools, which we have developed in the Academy.
We have now trained 500,000 listeners. Collectively, those listeners, and some members, have completed 932,544 courses, and we have helped millions of people. That is a remarkable amount of training! We are almost at 1 million courses completed and the best part is that all of this has been done for free. We do dozens of other things as well to increase the quality of listening on the platform. Our community leaders work very hard on this front. We have come very far, but we have a lot of work left to do.
Each week I get PMs about how much a listener has helped a member or a note that people found 7 Cups and we helped them turn their life around. These are my favorite messages to get. They make me proud of the work we do.
I also get messages from people that did not feel helped. They wish we would have done a better job. Mental health is a complex and complicated issue, and while we have made significant strides, I know that we have a long way to go. Every quarter we do better, but there is always room for improvement.
We are now adding these just-in-time training tools as a way to better equip our listeners and to help integrate learning from courses into real-time conversational support.
The Problem is the Path
At 7 Cups, we believe that the problem is the path - facing challenges helps us find solutions that enable us to stay on the right path.
As we conceptualized this tool, we had to ask ourselves the following questions to ensure what we are doing is as helpful as possible to different groups of people.
Should we use AI to provide just-in-time training to help listeners?
Should members be able to access listeners that are receiving just-in-time AI training support?
Should members have the option of chatting with listeners who, they know, are unable to use just-in-time AI-supported listening?
The answer to all of the above is “Yes.” People can choose to engage in AI training and support or choose not to. It is entirely up to them.
Towards Better Active Listening (Humans + AI)
Just-In-Time Training will be available to all new listeners and some leaders to help with testing. When a member connects with a listener, the member will be notified in a pinned message at the top of the chat that the new listener has JITT as an option.
When a new listener utilizing JITT clicks on the lightbulb icon/button, the AI creates the response by querying something called a large language model. You can think of this as a program that has ingested most of the human text on the Internet. It converts all of those words and sentences into numbers and then uses formulas to predict the string of words that offers a strong response to the member’s message. We have informed the formulas with what it means to be an effective active listener, so that the suggestions are fully compliant with 7 Cups protocols as well as conduct policies.
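To make the "convert words into numbers and predict the next string of words" idea concrete, here is a toy sketch. It is purely illustrative - the corpus, function names, and counting approach are simplified stand-ins, not how Noni or any real large language model actually works:

```python
# Toy illustration (not the actual model): a tiny bigram predictor that
# "ingests" text, turns word pairs into counts (numbers), and predicts
# the most likely next word from those counts.
from collections import Counter, defaultdict

corpus = (
    "i hear you . i hear that this is hard . "
    "that sounds hard . that sounds really painful ."
).split()

# Count, for each word, which words tend to follow it.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("that"))  # prints "sounds" - it follows "that" most often
```

A real large language model replaces these simple counts with a neural network trained on vastly more text, but the core task - predicting a likely next word from numbers derived from text - is the same.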
We will very likely limit the use of this tool after a certain amount of time, because we see it as a tool to help us strengthen our active listening and empathy muscles. We do not want to become overly dependent on it. We do not yet know what the optimal amount of time is for people to access it. Our initial plan is to roll it out with new listeners, learn, and then expand to more listeners if it makes sense.
Also, listeners cannot copy and paste the response. We have disabled this capability, and they are prompted to use it for inspiration but to send the message in their own words, because human-to-human connection is key.
Leaders are also helping us test JITT so we can learn more quickly. If you are a listener that has been on the platform and would also like to help us test JITT, then please sign up here. If you are a member, chatting with an older listener or leader, and you want to see if they have JITT turned on, then look at the pinned messages at the top of the chat to verify.
There is lots for us to learn over the next several weeks. I’ll keep you all posted.
Community Collaboration and Benefits
It is important to note that this just-in-time training feature will need tweaking. That is why we are implementing it in a very careful way. Listeners are primary, and human-to-human connection is central. If a listener is stuck and clicks the “suggest a response” button but is not happy with the message, they are free to ignore, delete, or revise it. While our admin team and professional experts have been working hard to train the just-in-time training to provide the best suggestions, we will be collaborating with the community and using all feedback to continue to tweak our JITT feature.
We have been thinking about AI and mental health since 2014. That is when we introduced Noni. I believe that humans + AI will enable us to decrease suffering. AI will only reach its full potential when it is implemented in a way that helps humans reach their full potential. And, I believe that members and listeners should have the ability to choose to interact with AI as much as they like - a lot, or little, or not at all (just like what we do with Noni now). Both of these can be true. They are not mutually exclusive.
We have always said that we use technology to facilitate human connection, healing and growth. Technology - like a website or app - is a medium much like a film or book. You can use the medium of film or text for good or bad. The same is true for technology and is true of the technology that we call AI. On 7 Cups, technology is a facilitator of human connection and is never a replacement. Listeners are and will always be primary (see this post from 2016).
7 Cups is a special place for a myriad of reasons. We are a community of the best people on the planet. The people with the biggest hearts that go the extra mile to help one another. Many of us are old souls. We’ve been through some ups and downs and we are real about how life unfolds. We don’t pretend to have it all figured out. We know healing is a process.
The Future of Active Listening is the Best Future of AI
As a reminder, AI and other advances in technology will not replace any of our systems but will improve our efficiency. As technology evolves, we continue to invent tools that make our lives better - our growth paths are an example. Now, we have hundreds of growth paths filtered by topic, but how can a listener recommend an activity, prompt, or tool for a unique person as a means of support? It would be a great next step to have ways to personalize our wellness journey - and AI will do just that. It is a paintbrush that respects unique human contexts, and as we embark on this path, we will find that it helps turn our needs into a reality.
Please share ideas, thoughts, reflections, and concerns below. I’ll do my best to answer them. Let’s also keep adding here or in some other dedicated section of the forum so we can figure these things out together.
Thank you!
To sign up to test JITT, you can fill out this form.
This is a very nuanced issue, and that is one reason why we had to write this up almost like an encyclopedia entry. And there is much more we could have written, which is why I think it is great that folks like you bring up issues that we can then further unpack in discussion.
As you point out, human to human connection is the core of 7 Cups. That IS what we do. We utilize technology in that process. When we first started 7 Cups, it was audio. As a psychologist, I thought: okay, meeting in person is nerve-wracking for a lot of people, so anonymous audio/phone connections will be much easier and less threatening. That was true - it was less threatening - but it was nowhere near as easy and unthreatening as anonymous messaging. We learned that very early on. So then we built out the technology of anonymous messaging to help people better support one another. Then, in a similar manner, we built out groups, forums, growth paths, etc. - all types of technology to enable us to better help people.
At the core of it, we use technology to help humans connect, heal and grow. Any tech can be used for good or bad. We work hard and put a lot of thought into how we use technology towards those outcomes we are trying to achieve.
And to zoom further back, the question becomes: How do you best help humans connect, heal and grow? That is the fundamental question and our highest ethic - the thing that drives us. The next question is: does X technology help us do that better? If yes, then it makes sense to carefully use it and monitor its impact to make sure it is actually helping us achieve that objective. We believe that careful and limited implementation of AI to support new listeners will help us better achieve that objective.
Your other key point is also a very good one - that the use of this AI should be limited so that it does not make people too dependent. We 100% agree on this. Many of you have probably seen the movie Her. If you haven't, then please watch it. Spoiler alert - humans spend more time talking to AI and less and less time talking to humans. It is a tragic and dystopian end. For this reason, we limit the amount of messages that a person can send to Noni per month. Additionally, and more to your point, we also limit the amount of messages a new listener has with JITT support. We do not want it to become a crutch - instead it has to be training wheels. You use training wheels to learn how to ride the bike so you don't fall as much and then you take them off once you know how to ride. The same principle follows with JITT.
Right now we are limiting new listeners to 90 messages a month. I hope that this is sufficient. If it isn't, then we'll need to bump it up a bit. If it is sufficient, then we can dial it down to the threshold where it is the right amount. At present, we don't know that answer yet, but we should know it - hopefully - in under 2 months.
Thanks again for the thoughtful comments! We'll be sharing access to JITT soon with people that have signed up. Thanks for being willing to check it out and give us critical feedback. Much appreciated!
Thank you so much @GlenM for taking the time to respond to my concerns 💜 I really appreciate it
That is a great step towards making sure listeners don't become overly reliant on this. Obviously the number can be fine-tuned as we go along, but I think it's a great start and it's definitely a step in the right direction.
I think that answers just about all of my concerns for now. I'm really excited to see this feature in action and I'm sure it will be a great addition to the site. And as always, I'd be more than happy to provide more feedback as I get to use it.
Thanks again for taking the time to address my concerns and putting my mind at ease.
I hope you have a great day! 💜
@GlenM
Will there be AI resources for 7Cups.com members that will help train members to better support other members, and to train members that want to host the Sharing Circle?
@spongbobishappy that is not currently planned, but it is an interesting idea. We will be building out a number of training modes where Noni will be able to act like a distressed member suffering from different challenges. The listener will then be able to help Noni in order to practice helping members. The goal is to provide specific feedback like: an open-ended question would be good here, emphasizing emotion would be helpful here, etc. I don't see why we couldn't also turn on this capability for members as well. How does that sound?
@GlenM
"turn on this capability for members as well" - that is a great idea !! 😊 Thank you.
I'm quite unwell right now, so maybe I missed something. Wanted to pipe in sooner rather than later.
-
Should members have the option of chatting with listeners who, they know, are unable to use just-in-time AI-supported listening?
I am not understanding what is being done to give members the option to not have AI used. I think members should have control to be able to turn off this feature at any time. Ideally a little button/toggle that shows at the top of the chat.
- AI guidelines should be added to community guidelines
We already know that some listeners are using AI that isn't Noni. I would like to see clear guidelines across the site. Here is what I think is reasonable:
- 1-1 chats - I think Noni-only is a reasonable limit, plus allowing discussion of other AI tools (for example, if I know a listener uses one, I might ask how the AI responds to something). I do not see why we should allow other AI to formulate responses for members or listeners beyond this, though. At the very least, permission should be required if we are allowing more than Noni.
- Forums/Articles/Resources - any AI-generated information, be it a post, reply, or article, should require citing the AI source.
- Group chats - I think it's reasonable to ban any AI-generated content in group chats (besides Noni, which, as it currently is, I do not consider to be AI). If we do allow a user to include AI-generated posts, then it should need to be clear that it is AI. Mods (unable to mute) should not be allowed to use AI unless there is something specific put into place as a modding tool only.
- Q&As - I would prefer no AI-generated content, period. If it is allowed, though, again require crediting the AI source.
- No joking or lying about what is AI and what is someone's own words, or about being a bot. We already have people here who are upset about AI in their lives and/or confused about whether they are talking to a real person on 7 Cups. We should avoid adding to any distress or confusion as much as is realistic.
- (Slight tangent, as this isn't just AI, but it includes it) better plagiarism guidelines. Plagiarism does come up in the guidelines document, but in an odd way that doesn't match my understanding of what plagiarism is. I have seen people take information from other places, copy and paste it completely, and make it appear as if they created it themselves. I think we could have a much clearer guideline about how to use material made by other people.
I could use a clearer guideline even for my own posts. I often use images for check-ins from image libraries that say crediting is not required. I did credit for a while, but stopped a while ago. There are just a number of steps already - compiling a tag list, noting a check-in was made for the team, using the featuring tools - and I got lazy about crediting a link for the image source. I still try to do it if using a different type of source, although it's possible I simply forgot in the past.
@AffyAvo great points, Affy, and hope you start to feel better soon. When a member talks with a listener that has JITT, they'll see a pinned message at the top of the chat. In any regular chat, there are 2 pinned messages - one about not sharing contact information, and one about 7 Cups not being a crisis service, with a list of hotlines etc. for more appropriate crisis help - and there will now be a third that says this listener has access to AI support.
Yes, we will update the guidelines and other elements of the site as well. And kudos to you for being so diligent on citing appropriate sources for images. Not easy :)
@GlenM So we are allowing members to see, when in a chat, if a listener has the AI option. That's a start. I think it would be much more ideal if a member had control over whether a listener they are chatting with can use AI or not. I see 2 ways of doing this.
- As mentioned above, having the option to block AI (or allow the option) when in a chat with any listener. This would mean the member's preference overrides the listener's.
- Have it as an option when connecting. This could mean showing on a listener's profile when browsing listeners and also allowing the option of no AI when sending out a general request. This would allow listeners and members to both have choice when it comes to AI, and would limit connects based on this preference.
A downside I see of just informing members - and not giving them any choice beyond declining to chat once a member-listener connection has already been made - is the potential for increased member frustration and then more ghosting or blocking of listeners.
@GlenM
I won't say I read each and every word, as I am not a good reader. But I picked up on the questions and doubts I had regarding AI.
I think you and your team have gone thoroughly over most of the pros and cons:
- We can't copy and paste it.
- The importance of more human connection.
- No compulsion to use its help.
- It is going to be a great help for the beginner listener group.
- For someone like me, where empathy doesn't come naturally.
- For someone coming back from a break.
I love that, before implementing, you are open to getting feedback.
I appreciate that you are putting in great effort to make this place better so that more and more people feel helped.
It is like we have been given all the materials to paint a picture. Now it is up to us how wisely we use them. Let's not make members feel like they are talking to a robot.
Good luck. God bless you.
@blindHeart12
Hi friend.
I noticed your sentence: "Someone like me, where empathy doesn't come naturally." Remember that AI doesn't have a conscience. It can only mirror feelings.
@Helgafy @blindHeart12
These are my understandings and my subsequent opinions about this issue.
I think what Heart meant by difficulty with empathy is difficulty showing empathy.
Often we want to be empathetic but it can be tough to put that empathy into words, especially if we're not so fluent in English.
AI can help us phrase our sentences properly to convey what we're trying to say.
Now this is not to say that AI has feelings or a conscience, or even that AI's portrayal of emotions and a human's actual emotions are the same.
But as AI gets more and more sophisticated and our algorithms and training data improve, we are likely to approach a point where AI and human speech are indistinguishable.
For the sake of simplicity and time, I'll avoid going on a technical tangent here as to why that is. I go into more technical detail in the next post so if you're interested, please read that one.
Even now, existing AI can generate content that, to the untrained eye, is very much the same as human text. And even now, listeners can use it to help them with chats. The only difference is that it's not as easy and they have to do a lot of work (copying, pasting, giving the right prompt, etc.) to get it to work. But regardless, a tool from an external source can still very much be used in a limitless way, seeing as it is nearly impossible to tell with certainty whether a text was generated by AI or not.
I think the benefit of having this new in-house solution would be greater control over the AI and the ability to customize it to our needs. It could be pre-trained on the core listening skills and be trained to provide specific responses or resources in certain situations.
So here I'll leave you with a question:
Do you want a listener who gets no AI help from 7 Cups (you can't know if they use resources from elsewhere) and has broken grammar and follows no 7 Cups guidelines
or
Do you want a listener who gets AI help from 7 Cups (you know exactly what they're using) and has perfect grammar and asks questions like "how does that make you feel?" and "what would make you feel better about this?" (since the AI they use has been trained to ask such questions)?
@Mahad2804
Hi friend.
At the bottom of page 1 you can see my meeting with a listener who was using AI. I asked him about his very fine English several times, but he never said the poems were AI. He was living in India. But so - I found out everything was a big AI story, and not something from himself. I'm very glad that I was not in the position of needing a friend. You're very academic, so I think I'm unable to discuss your fine text with you - LOL.
Heya @Helgafy
First of all, I'm so sorry you had to go through that horrible experience. It's unfortunate to see that people use this wonderful platform for exploitation as well.
Secondly, I don't think this person is a good example to take for AI use on listening. Sure they were using AI but they were not using it to listen, that AI was not meant to be used for listening, and they were breaking multiple rules with everything they were doing.
It is to avoid issues like that, that I say we should have an in-house AI that the 7 Cups team can control and fine tune so that a) we can provide help to listeners who really need it for listening and b) we can actually build an AI model for the purpose of listening that we can then further expand for use in things like the censor or an auto-reporting system. Basically setting us up for future developments in the field of AI and having the data and algorithms to implement it quickly.
Lastly, I'm sorry if I came off as too academic or unapproachable. I try my best to avoid technical language and phrases when posting here but sometimes they slip their way through.
Regardless though, don't ever hesitate to ask me a question or point out something you think I said wrong. We're all humans (or maybe AI, who knows) and we're always learning so never a bad thing to have something pointed out to you.
Hope this clears up any confusion
Take care and have a great day ❤️
@Mahad2804
Thank you so much for your fine answer.
@Mahad2804
You got my point well when I said empathy doesn't come naturally. Yes, the framing of sentences - the stuck situation when you want to express something but lack the words to express it nicely and clearly.
Now for the tangent:
The reason why I say that AI text will become indistinguishable from human text is because of the way AI works.
The backbone of AI is a neural network. To put it simply, a neural network is a tool that can learn from data and make predictions based on what it has learned. It works very similarly to the human brain where it has neurons (nodes) that are connected. Each neuron affects every other neuron that it's connected to.
And just like a human learns from experience, a neural network learns from data.
Think about a baby. When a baby is born, their brain is like a blank slate. They don't know anything. But as they grow up, they learn from their experiences.
They learn that when they cry, they get food. They learn that when they touch a hot stove, they get burned. They learn that when they say "mama" or "dada", they get attention.
They learn that when they do something bad, they get punished. They learn that when they do something good, they get rewarded. They learn that when they do something, they get a certain reaction.
And they learn that when they get a certain reaction, they should do something.
Similarly, a neural network starts with no knowledge. Then what we do is we show it some data and ask it to predict what the "outcome" or "answer" should be.
I'll use an example to explain this better. Let's say we want to train a neural network to recognize a cat. We show it a bunch of pictures of cats and a bunch of pictures of non-cats. We ask it to make a prediction on whether the picture is a cat or not.
If it gets it right, we reward it (meaning we reinforce the neurons that it used to make that prediction).
If it gets it wrong, we punish it (meaning we tweak the neurons that it used to make that prediction). We do this over and over again until it gets it right every time.
Then we show it a new picture of a cat and ask it to make a prediction. It should be able to tell us whether it's a cat or not just like a child who has seen cats before should be able to tell us.
Now this is a very simple example. In reality, we use a lot more data and a lot more complex neural networks. But the idea is the same.
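The reward-and-punish loop described above can be sketched in a few lines of code. This is a hedged, minimal illustration - the "cat" features and data points are invented for the example, and real networks have many layers of neurons rather than a single one:

```python
# A single artificial neuron (a perceptron) learning to separate two
# classes from examples. It starts with "no knowledge" (zero weights)
# and nudges its weights whenever it predicts wrongly.

def train(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights start at zero: a blank slate
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            prediction = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            # a wrong answer is "punished" by shifting the weights
            # toward the correct one; a right answer changes nothing
            error = label - prediction
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            bias += lr * error
    return w, bias

# made-up features: (ear pointiness, whisker length), 1 = cat
examples = [((0.9, 0.8), 1), ((0.8, 0.9), 1),
            ((0.1, 0.2), 0), ((0.2, 0.1), 0)]
w, bias = train(examples)

def is_cat(x1, x2):
    return w[0] * x1 + w[1] * x2 + bias > 0

print(is_cat(0.85, 0.85))  # True: looks like the cat examples
print(is_cat(0.15, 0.15))  # False: looks like the non-cat examples
```

After training, the neuron correctly labels new points it has never seen, just as the child who has seen cats before can recognize a new one.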
So now that you understand how AI works, let's talk about why AI text will become indistinguishable from human text.
This AI, it's learning from human text. And it's learning using centuries worth of human text. So it would make sense that it would be able to generate text that is practically just human text.
Another thing to note is that it's not just about the AI. It's also a matter of the tools we use to detect AI-generated text. Currently, the most renowned and reliable tool I've seen and used is GPTZero.
GPTZero uses a couple of factors to determine whether a text is AI-generated or not. One of them is perplexity. Perplexity is a measure of how predictable a piece of text is to a language model. The lower the perplexity - that is, the more predictable and less varied the text - the more likely it is to be AI-generated.
But this can be overcome quite simply, using just the very same AI that we're trying to detect.
I'm not gonna tell you how to do it because I don't want to give anyone any ideas but it's not that hard to figure out.
So even the tools that are supposed to catch AI-generated text can be fooled by those exact same AI models.
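To illustrate the perplexity idea, here is a tiny sketch - a simplified, made-up example using raw word frequencies, not GPTZero's actual method - that scores how predictable a text is to a model:

```python
# Toy perplexity: score text against a simple word-frequency model.
# Text made of words the model expects gets a low score (predictable);
# text full of words it has rarely seen gets a high score (surprising).
import math
from collections import Counter

reference = ("the cat sat on the mat the dog sat on the rug "
             "a cat and a dog sat together").split()
counts = Counter(reference)
total = sum(counts.values())

def perplexity(text):
    words = text.split()
    # probability of each word under the reference model, with a small
    # floor so unseen words don't produce a zero probability
    log_prob = sum(math.log(counts.get(w, 0.1) / total) for w in words)
    return math.exp(-log_prob / len(words))

# Predictable text scores lower than surprising text:
print(perplexity("the cat sat on the mat") <
      perplexity("quantum turbines hum beneath emerald skies"))  # True
```

Real detectors use a full language model rather than raw word counts, but this is also why they can be fooled: text deliberately generated or rewritten to look less predictable pushes its perplexity back up toward the human range.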
I think this is a somewhat scary thought. But I also think that it's inevitable.
If not now, then tomorrow. If not tomorrow, then the day after or the day after that. But eventually, AI will be able to generate text that is indistinguishable from human text.
So what can we do about it? I think the best thing we can do is to embrace it.
I think @GlenM and the team have put a lot of thought into this and I think they've made the right decision. Giving a tool like this to listeners will keep them from relying on external sources and will allow them to focus on the member and the conversation. Sure, it will need to be limited and monitored but I think it's a step in the right direction.
We need to start embracing AI because regardless of whether we like it or not, it's here to stay. And there are gonna be people who use it even if it's not allowed. So we might as well give it to them and make sure they use it responsibly.
This has been a long post and I hope you've made it this far. I'd love to hear your thoughts on this. Please feel free to comment below. I'd love to have a discussion about this.
Thanks for reading and have a great day! 😊
@GlenM
One of the things that prevents me from being a listener is the fear of being stuck for an answer. So something that helps with that is welcome. And it does sound like this could be helpful.
I do have concerns about AI in our lives long term. But I also believe it is here to stay, and we need to learn to live with it. I appreciate 7 Cups' thoughtful approach to the controls needed to ensure it is helpful, not harmful.
@Clio9876 it is pretty easy to get stuck in chats and totally normal, so glad you think this will help. I do too!
I'm also concerned about AI long-term and impact on humanity. That is one reason why I think it is incredibly important that we learn as a community how to integrate AI in a way that is aligned with our values and culture. And there isn't a group of people that I trust more to help figure this out. I don't think it is an option to pretend it isn't happening. I think we have to steer into it with our eyes wide open and find the best path through. And, to me, we know when we are on that path when we are honoring our values and culture and demonstrably helping people better connect, heal, and grow.
https://www.thediplomaticaffairs.com/2023/07/28/the-perilous-consequences-of-losing-our-humanity/
Yeah this isn't creepy at all. Teaching new listeners to be more like artificial intelligence. Just replace all the humans and get it over with. It's what you all want.
@brightskies8321, did you read the post?
@GlenM Sometimes no response is the better option.
Thank you for these guidelines. I am just a click away. Xx
I like to get to know you all.x
@GlenM
Commenting to be updated
All - quick update on this front. We just pushed this live to all new listeners that have been on the platform for less than a month. All new listeners that sign up from today forward will also have access. We need to track how it is used and the optimal length of time to keep it on. I'll keep you posted on this front as we learn more.
Also, we now have a new badge for this too. Any of you that have requested access will soon or already have the Noni help badge that enables access to this capability.
Please let us know what you think after testing it out. All feedback is very welcome. Here are some initial questions / thoughts to consider:
1. How challenging is listening without JITT, on a scale of 1-10? How challenging is it with JITT?
2. Any concerns with JITT? If yes, how would you recommend we address or fix them?
3. Any other thoughts, reactions, or ideas?
Heya @GlenM
Thanks for letting us know!
I just wanted to ask if you could tell us the name of this badge so that everyone is aware and there's no confusion regarding it.
Super excited to test it out soon!
@GlenM
Members should have toggles to manage their choices in their settings page:
JITT (add quick explanation of what it is)
@Myosotis17 these are great points. We will get there on many of these. They just take time to build, as settings that bridge listener and member behavior are very complicated because so many different relationships can exist. That is why we are doing the pinned message at the top of all member chats that are connected to listeners that have JITT available, even if they are not using it.
@GlenM thanks for this update
Just tagging our JITT testers here:
@wildflower999 @HealingTalk @lyricalAngel70 @SuryanshSingh @BunnygirlAnna @fristo @YourCaringConfidant @azuladragon34 @Mahad2804 @KindWolf2023 @ComfortableSmiles97 @Jyne @KateDoskocilova @GoldenRuleJG @grimsmark8 @FrenchMarbles @Greatlifetoyou @CompassionateConnections @Lunaire00 @Here4you224 @Friendlyanju777
@GlenM Right quick, I just wanted to express the ease of using this feature. I tested it out with my member account first (I wanted to see what a member would see on their end before trying it out with an actual member needing support). I have to say, I really like how, when you click the light bulb, a suggestion for a response pops up that we can either use or dismiss. I was thinking a message would pop up on the member's side noting that a message was not "authentically" the listener's, but that wasn't the case. There is a message at the top of the chat making the user aware that the listener has access to the training program, but it does not specifically notify them when the hint suggestion is being used. I would like to ask, as a listener, whether it is frowned upon or encouraged to bring to the member's attention that we used the suggestion and that it did not fully come from us. I know we have to word it to suit us, but still, part of me would want to fully disclose this, especially to the ones who are against AI. I like to think I have my own way about me, so I just wanted to try it out on my own account first to get an idea. But I'm not sure, if I am really using this feature with a real person, whether I should let them know or just hush. Please let me know what you think. Thanks.