2023 has started with some important conversations about artificial intelligence and GPT-3. Those of us who study and work in this space are passionate about harnessing artificial intelligence as a force for good while also being sensitive to the potential risks it surfaces. It is our life’s work - and work that requires continued learning, growth, questioning, and adaptation.

I welcome this learning and this robust discussion. In fact, these questions have been at the core of our work at Koko. Our research has explored how to increase the effectiveness of online crisis referrals and the utilization of crisis resources such as the National Suicide Prevention Lifeline through machine learning-based crisis triage. And throughout our work, we collaborate with trusted experts in this burgeoning field and work hand-in-hand with our clinical advisory team.

Koko offers multiple interventions, but our peer support service is the tender, beating heart of our platform. It’s based on the simple idea of helping others to help yourself.

We are also learning so much about how peer supporters can be most effective. Much like social media platforms that offer a starter message you can tailor to wish a friend “happy birthday,” we saw that using GPT-3 to provide a response option that a human could adapt improved response ratings. Every time a human peer supporter used an assist from GPT-3, the message receiver saw that Koko Bot had helped (see below)!

[Image: a Koko peer-support reply showing the “Koko Bot helped” disclosure]
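For readers curious what this looks like in practice, here is a minimal sketch of the human-in-the-loop pattern described above: a model drafts a reply, the peer supporter is free to adapt or discard it, and the delivered message carries a disclosure whenever the assist was used. Every name here (draft_reply, PeerMessage, the disclosure wording) is a hypothetical illustration, not Koko’s actual implementation, and the model call is stubbed out so the example stays self-contained.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class PeerMessage:
    """A reply delivered to a receiver, with an AI-assist disclosure flag."""
    text: str
    machine_assisted: bool  # True if the supporter started from a model draft

def draft_reply(post: str) -> str:
    """Stand-in for a GPT-3 completion call that drafts a supportive reply.

    A real system would call a language model here; a canned draft keeps
    this sketch runnable without any API access.
    """
    return "That sounds really hard. Thank you for trusting us with it."

def compose_reply(post: str,
                  supporter_edit: Callable[[str], Tuple[str, bool]]) -> PeerMessage:
    """Offer the supporter a draft; they return (final_text, used_draft)."""
    draft = draft_reply(post)
    final_text, used_draft = supporter_edit(draft)
    return PeerMessage(text=final_text, machine_assisted=used_draft)

def render(msg: PeerMessage) -> str:
    """Attach the disclosure the receiver sees when the assist was used."""
    footer = "\n(Koko Bot helped with this reply)" if msg.machine_assisted else ""
    return msg.text + footer

if __name__ == "__main__":
    post = "I've been feeling completely overwhelmed lately."
    # A supporter who personalizes the draft before sending it.
    reply = compose_reply(post, lambda d: (d + " You're not alone in this.", True))
    print(render(reply))
```

The key design choice the sketch tries to capture is that the disclosure travels with the message itself, so the receiver always knows when a machine assisted, regardless of how heavily the supporter edited the draft.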

As recipients of messages on social media, we know when a post is purely machine-generated. That fourth “congratulations on your work anniversary” message on LinkedIn rings less true than the one clearly written by someone who knows you and adapts their message to your specific skills and achievements.

That’s why I recently shared my belief that GPT-3 can be helpful, but that on its own it lacks elements of what humans need and crave in online interactions designed to improve well-being. Humans exhibit an empathy and specificity that machines can’t mimic.

And that’s why I receive critiques, concerns, and questions about this work with empathy and openness. We share an interest in making sure that any uses of AI are handled delicately, with deep concern for privacy, transparency, and risk mitigation. Our clinical advisory board is meeting to discuss guidelines for future work, specifically regarding IRB approval.

In the end, our mission is a human one - to create tools that support well-being and contribute to positive mental health outcomes. Our methods for getting there will continue to undergo rigorous testing, questioning, and adaptation, and we welcome constructive feedback, questions, and comments that will help us achieve this mission.

So as we move through 2023 and beyond, let’s work together to ensure we can harness the benefits of AI while protecting those who are most vulnerable. Thank you for your feedback, your partnership, and your collaboration in support of better mental health outcomes for those in need.

-Rob, Cofounder of Koko