This article is a 15-minute extended read.

Advancements in technology, in the form of Artificial Intelligence, robotics, and virtual/augmented reality, have moved beyond the traditional practical applications of providing products or facilitating processes to more sophisticated human interaction in health, education and training, and now to therapy and coaching.


Virtual and Augmented Reality

Will “live” coaches and therapists be replaced by online avatars using interactive technology? Will creative inventors recognize the preference of younger people to use their smartphones and tablets for all of their social interactions? There are clear signs we are already moving in that direction.

Due to the enormous diffusion of the Internet, telepsychology, and telehealth in general, have become accepted and validated methods for the treatment of many different health care concerns. The introduction of Web 2.0 has facilitated the development of new forms of collaborative interaction between multiple users based on 3-D virtual worlds.

Now, emails, podcasting, chat rooms, videoconferencing and texting are being adapted for therapeutic use. In addition, through developments in gaming technology, we could see avatar therapy and coaching: clients could experience behavioral interventions in a therapeutic environment, while those seeking goal achievement, greater self-awareness and relationship improvement could experience a coaching relationship. Some people simply feel more comfortable in a virtual environment than in a real one.

Max Celko, a researcher at the Hybrid Reality Institute, describes how these mental health and self-improvement services are increasingly accessible via mobile apps. The newest of these apps integrate Artificial Intelligence capabilities similar to Apple’s virtual assistant, Siri. Celko argues that “these intelligent systems will make our devices come to life, taking on new functions as our personal virtual psychotherapist or life coach.”

Celko says that with further advances in artificial intelligence, it may become possible for AI systems to possess emotion-sensing capabilities, enabling them to detect users’ emotions and intents. “Interacting with ‘humanized technology’ in the context of therapy and coaching will turn our devices into ‘identity accessories’: They will become tools to actively sculpt our behaviors and identity,” Celko contends.

An example is Mindbloom, which is built on a gaming platform. Users can connect with each other to modify behaviors, attain goals and be more successful, and they can also send motivational messages and compare their progress. In essence, Mindbloom “crowdsources” life coaching services from one’s group of friends.


The marketplace is already infused with businesses aimed at taking advantage of this virtual reality trend:

  • Virtual Therapy Connect, which provides a proprietary HIPAA-compliant, web-based communications platform enabling therapists to connect from their Virtual Therapy Connect Online Office with clients for secure video therapy sessions and secure real-time online text chat sessions.
  • Talk To An Expert, which provides therapist, counselor, coach, consultant and trainer services online.
  • The Online Therapy Institute, which offers training and consultancy to mental health practitioners, coaches and organizations worldwide who have an interest in using technology to deliver services.
  • The Virtual Reality Medical Center, which uses virtual reality exposure therapy (three-dimensional computer simulation) in combination with physiological monitoring and feedback to treat panic and anxiety disorders.
  • The Virtually Better Clinic in Atlanta, which provides an aftercare treatment option using Second Life, the popular virtual world program.
  • Mobilyze, a digital device app developed by Northwestern’s School of Medicine, which monitors a person’s location and social interactions and alerts them to signs of depression;
  • MoodKit, an app developed by clinical psychologists, which helps users identify and change unhealthy thoughts and chart their state of mind.
  • SimSensei, an online virtual therapist developed by the University of Southern California’s Institute for Creative Technologies, has an animated avatar which asks the user questions while analyzing non-verbal cues such as body language and facial expressions to help diagnose anxiety.
  • A virtual coach “Shelley,” developed by Healthwise, a non-profit that designs corporate patient education materials, uses conversations to help the user make decisions and change behavior.
  • Skip Rizzo and his colleagues at the University of Southern California’s Clinical VR Research Group  have projects using virtual reality aimed at psychological disorders, PTSD, pain, cognitive assessment, rehabilitation and virtual patient clinical training; and
  • Virtual Life Coach, which provides coaching in a virtual environment.


Research


Cristina Botella and her colleagues at the Universitat de Valencia in Spain published a research study in the journal Clinical Psychology and Neuroscience examining the efficacy of using virtual reality in psychotherapy. They concluded: “Compared to the ‘traditional’ treatments, VR has many advantages (e.g., it is a protected environment for the patient, he/she can re-experience many times the feared situation, etc.).”

Alessandra Gorini and her colleagues published a research report in the Journal of Medical Internet Research on the use of virtual reality in therapy. They concluded: “We suggest that compared with conventional telehealth applications such as emails, chat, and videoconferences, the interaction between real and 3-D virtual worlds may convey greater feelings of presence, facilitate the clinical communication process, positively influence group processes and cohesiveness in group-based therapies, and foster higher levels of interpersonal trust between therapists and patients.”

In a study by Youjeong Kim and S. Shyam Sundar published in the journal Computers in Human Behavior, the authors argued that “user-created self-reflecting avatars made salient different mental images of their bodies based on whether they customized their avatars to look like their actual or ideal selves, and consequently influenced their perceptions toward their physical body,” with positive consequences for participants’ health outcomes.

A study by the Center for Connected Health found overweight participants who used an animated, virtual coach lost significantly more weight than participants who had no virtual coach. And according to a study published in the Journal of Medical Internet Research, those using a virtual coach in an exercise program over a 12-week period maintained their exercise regimen, while those without a virtual coach saw their exercise levels drop by over 14%. Co-author of the study, Timothy Bickmore, argues virtual coaches have a role in health and wellness.


A study by clinical researchers at the University of Zurich looked at whether online psychotherapy and conventional face-to-face therapy are equally effective. The results for online therapy even exceeded their expectations. They concluded: “in the medium term, online psychotherapy even yields better results. Our study is evidence that psychotherapeutic services on the internet are an effective supplement to conventional therapeutic care.”

A study published in the American Journal of Preventive Medicine found that real change in patients came from collaborative discussion, or “motivational interviewing.” Instead of diagnosing and telling the patient what medication or treatment to take, the therapist asks what changes and goals the patient is willing to make. This may account for the growing popularity of coaching, where the focus is predominantly on the client taking responsibility for action.


Chatbots, Companion/Social Bots and Robots

Artificial Human Companions, Companion Bots and Social Bots


These may be any kind of hardware or software creation designed to give companionship to a person. They can include digital pets, such as the popular Tamagotchi, or robots, such as the Sony AIBO. Social companions can be used as a form of entertainment, or they can serve medical or functional roles, assisting the elderly in maintaining an acceptable standard of life. Social robots are artificially intelligent systems with a physical embodiment that act autonomously and interact and communicate with humans. Social robots such as Tico, Jibo, and iCub have already moved beyond the experimental stage and are interacting with people in real life. Some, such as hitchBOT, have even acquired media-star status.

Examples of Social Bots

Examples of social robots include the Care-O-bot, telepresence robots such as Giraff and VGo, and companion robots such as Aibo, Yumel, PLEO, and Huggable. While these are already being used in many facilities for the elderly, more rigorous research is still needed to measure their effectiveness in health care. For individuals with autism, research has already shown that they can be even more responsive to treatment with social robots than with human therapists, owing to their difficulty with social cues. Examples include the AuRoRa (Autonomous Robot as a Remedial Tool for Autistic Children) Project and the IROMEC (Interactive Robotic Social Mediators as Companions) Project.

Wired magazine ran an extended article describing what are known as “socialbots,” an advanced form of chatbot designed to offer help, support and companionship to people. Here’s a brief summary of the description.


  • Researchers at Japan’s AIST developed PARO, which comes in the form of a cute baby white seal, for patients at hospitals and extended care facilities who could benefit from animal assisted therapy but for whatever reason, such as community rules that bar actual pets, can’t have an animal. The interactive robot has five types of sensors that can detect the environment around it; the device also remembers how people interact with it—if you repeatedly pet PARO in a certain spot, it will remember the spot and react favorably to your touch. If you hit it because it did something you didn’t like, PARO will remember not to do that action again. PARO recently had a starring role in the Netflix TV show Master of None. The robot was introduced at the beginning of “Old People” as Arnold’s (Eric Wareheim) grandfather’s robotic pet.
  • JOY FOR ALL COMPANION PETS. Hasbro has developed a new toy line, called Joy For All Companion Pets, for senior citizens in need of companionship who aren’t able to care for a real animal. These robotic cats look, feel, and act pretty much like the real thing—they don’t walk, but thanks to built-in sensors, they purr and nuzzle when touched; they also meow, sleep, and roll over for belly rubs.
  • PHOBOT. Student researchers at the University of Amsterdam developed Phobot, an interactive robot that serves as a strong visual and learning aid to help children who suffer from anxiety and phobias. It was built using various LEGO Mindstorms NXT kits and a number of RFID sensors. When Phobot is confronted by larger objects, it’s programmed to react with fear: It raises its eyebrows, turns around, and zips away in a panic. When it’s confronted with smaller objects, however, it can be coached not to be afraid. Then, when a larger object confronts Phobot again, it can be coached not to react with fear. Researchers believe robots like Phobot can teach children how to deal with and ultimately overcome their phobias and anxieties. “This robot is there as a sort of buddy to help a child having any kind of actual fear, doing it step by step,” team member Ork de Rooij said. Phobot was built for the University of Amsterdam’s 2008 Human-Robot Interaction Student Design Competition, where it was voted the conference’s favorite robot.
  • OLLIE THE BABY OTTER. Studies have shown that our relationships with animals can create feelings of safety and security; being around domesticated animals like dogs and cats can have a positive effect on a patient’s social, emotional, or cognitive well-being. Ollie the Baby Otter was specifically built for Animal Assisted Therapy, which, as the name implies, relies on animals to help people suffering from things like cancer, dementia, or post-traumatic stress; scientists hope that allowing a patient to cuddle Ollie during therapy will help him or her through the healing process. In 2013, a class of MIT students in a course on Product Engineering Processes built Ollie for about $500, using a Raspberry Pi (a cheap and powerful computer) for its brain. Thanks to a sensor board and custom motor, it can also understand how someone is interacting with it through touch and respond favorably with movement and sound: The bot hugs a patient’s hand and purrs when its belly is rubbed. Its users are encouraged to gently hold and cradle Ollie like an infant, but the robot is durable and waterproof.
  • KEEPON PRO. BeatBots—a robot design studio based in San Francisco and Sendai, Japan—created Keepon Pro in 2003 specifically for children with developmental disorders like autism. People with autism often have trouble keeping eye contact with other people, so a therapist can use Keepon to interact with a child in a social setting without the child shutting down. Keepon’s eyes are two small cameras, and its nose is a microphone, which feed information to the therapist in another room. The bot is equipped with four motors, which the therapist can control remotely. Keepon also features facial recognition software that can detect eye contact and movement. The robot is also a pretty good dancer; the professional version of the robot has been featured in several music videos, and a mainstream version for kids, called MyKeepon, was also developed.
  • NECORO. In 2001, Japanese toy manufacturer Omron developed and designed NeCoRo, one of the first robotic lap cats made for seniors in the country. While it couldn’t walk or perform tricks, the cat contentedly purred when stroked and gave positive or negative emotional feedback, depending on the user’s actions. If a user neglected NeCoRo, for example, the robot would be less affectionate the next time the user interacted with it.
  • PEPPER. In 2014, French robotics company Aldebaran invented Pepper, a social humanoid robot designed to live in a person’s home. It interacted with its owner by using voice and touch, and was designed to understand human emotions: For example, if its owner laughs, the robot will understand that the person is happy; if a user frowns, the robot will know something is wrong. Pepper analyzes a user’s body language, facial expressions, and words to properly guess his or her mood. The robot is equipped with 3D cameras, an ultrasound system, and tactile sensors to explore the world around it and feel its owner’s touch. It can even connect to the Internet to expand and broaden its knowledge. Currently, Pepper is used to greet and interact with customers at SoftBank Mobile stores in Japan. The SoftBank Group is Aldebaran’s parent company.
  • DREAM PET SERIES. In 2007, Sega worked with scientists and researchers from Tohoku University’s Institute of Development, Aging and Cancer in Northern Japan to develop and design the Dream Pet Series. While Sega is mostly known as a video game developer and publisher, it started to manufacture electronic toys in 2002, after the failure of the Sega Dreamcast ended its run as a gaming console giant. Sega wanted to make its robotic household pets more realistic and highly therapeutic for patients and the elderly, who use the mechanical animals for relaxation and to ease tension. Sega’s Dream Pet Series includes chicks, an owl, a kitten, a parrot, and a dog, along with two cats.
  • POPCHILLA. The Spark Fund for Early Learning at The Sprout Fund in Pittsburgh helped local company Interbots develop Popchilla, a “puppeteerable robot” with a companion iPad app. The goal of the robot was to help children with autism learn to identify emotions, and, in turn, teach them to respond to social cues. “By using Popchilla as an intermediary, we hope to increase the understanding of the child’s internal feelings, thus reducing behavioral frustrations,” Cindy Waeltermann, the Founder and Director of the Autism Centers of Pittsburgh, said. “If they are able to identify that they are ‘angry’ and what ‘angry’ means, it can significantly help them understand what they are feeling, reducing behavioral ramifications.”
  • THE HUG. The Hug is a soft, robotic pillow, or CareBot, that uses sensing and wireless phone technology to give an enhanced physical sensation of touch during a phone call. The pillow gave its users a stronger social and emotional connection to the person on the other end of the line. Researchers at Carnegie Mellon in Pittsburgh discovered elderly people need the most emotional support, so The Hug was designed with the sole purpose of delivering tactile and physical responses through voice recognition software and a small microphone built inside of its cushion. Sadly, The Hug is not available for purchase. It was part of an academic research initiative to link robotics technology to intimate communication.


Chatbots

A chatbot (also known as a talkbot, chatterbot, bot, IM bot, interactive agent, or artificial conversational entity) is a computer program or artificial intelligence that conducts a conversation via auditory or textual methods. Such programs are often designed to convincingly simulate how a human would behave as a conversational partner, thereby passing the Turing test. Chatbots are typically used in dialog systems for various practical purposes, including customer service and information acquisition. Some chatbots use sophisticated natural language processing systems, but many simpler systems scan for keywords within the input and then pull a reply with the most matching keywords, or the most similar wording pattern, from a database.
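
To make that keyword-matching approach concrete, here is a minimal sketch in Python; the keyword sets, canned replies and reply() helper are invented for illustration rather than drawn from any real product.

```python
# The keyword sets, canned replies and fallback below are invented for
# illustration; they are not taken from any real chatbot product.
REPLY_DATABASE = {
    frozenset({"anxious", "anxiety", "worried"}):
        "It sounds like you're feeling anxious. What's weighing on you?",
    frozenset({"sad", "down", "depressed"}):
        "I'm sorry you're feeling low. Do you want to talk about it?",
    frozenset({"goal", "goals", "plan"}):
        "Let's break that goal into smaller steps. What's the first one?",
}
FALLBACK = "Tell me more about that."

def reply(user_input: str) -> str:
    """Return the canned reply whose keywords best overlap the input."""
    words = set(user_input.lower().split())
    best = max(REPLY_DATABASE, key=lambda keywords: len(keywords & words))
    if not best & words:            # nothing matched at all
        return FALLBACK
    return REPLY_DATABASE[best]

print(reply("I feel anxious about my goals"))
```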

Tess, the mental health chatbot that thinks like a therapist

Tess is a mental health chatbot. If you’re experiencing a panic attack in the middle of the day or want to vent or need to talk things out before going to sleep, you can connect with her through an instant-messaging app, such as Facebook Messenger (or, if you don’t have an internet connection, just text a phone number), and Tess will reply immediately. She’s the brainchild of Michiel Rauws, the founder of X2 AI, an artificial-intelligence startup in Silicon Valley. The company’s mission is to use AI to provide affordable and on-demand mental health support.

Saint Elizabeth Health Care, a Canadian non-profit that primarily delivers health care to people in their own homes, recently approved Tess as part of its caregiver-in-the-workplace program and will offer the chatbot as a free service for staffers. It is the first Canadian health care organization to partner with Tess, and the first time Tess has been trained to work specifically with caregivers, providing them with appropriate coping mechanisms. Tess first needed to learn about their emotional needs: in a month-long pilot with the organization, she exchanged over 12,000 text messages with 34 Saint Elizabeth employees. The personal support workers, nurses and therapists who helped train Tess would talk to her about what their week was like, whether they had lost a patient, and what kinds of things were troubling them at home – the sorts of things you might tell your therapist. If Tess gave them a response that wasn’t helpful, they would tell her, and she would remember her mistake. Then her algorithm would correct itself to provide a better reply next time.
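
How such feedback-driven correction can work is sketched below; the FeedbackBot class, its scoring scheme and the sample replies are all hypothetical, since X2 AI has not published Tess’s actual algorithm.

```python
import random
from collections import defaultdict

# A toy illustration of the feedback loop described above: the bot demotes
# replies users flagged as unhelpful and favors ones that earned praise.
class FeedbackBot:
    def __init__(self, candidate_replies):
        self.scores = defaultdict(float)      # reply text -> running score
        self.candidates = list(candidate_replies)
        self.last_reply = None

    def respond(self, message: str) -> str:
        # Prefer replies with the best feedback history; the small random
        # term keeps the bot occasionally exploring alternatives.
        self.last_reply = max(
            self.candidates,
            key=lambda r: self.scores[r] + random.uniform(0, 0.1),
        )
        return self.last_reply

    def feedback(self, helpful: bool) -> None:
        # "Remember the mistake": adjust the score of the reply just given.
        if self.last_reply is not None:
            self.scores[self.last_reply] += 1.0 if helpful else -1.0

bot = FeedbackBot([
    "That sounds stressful. What helped you cope last time?",
    "Have you tried taking a short break?",
])
print(bot.respond("I lost a patient this week"))
bot.feedback(helpful=False)   # the bot will favor the other reply next time
```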

One of the things that makes Tess different from many other chatbots is that she doesn’t use pre-selected responses. From the moment you start talking, she’s analyzing you, and her system is designed to react to shifting information. Tess’s great value is accessibility: many caregivers found her convenient to talk with because she could be reached at any time – and time is something caregivers don’t have a lot of.

While she is trained to act like a therapist, Tess is not a substitute for a real one; she’s more of a partner. If, while you are chatting with her, she senses that your situation has become more critical – through trigger words or language she has been programmed to look for – she will connect you with a human therapist. In other cases, she might provide you with the resources to find one. That said, many caregivers who chatted with Tess said they felt more comfortable opening up to her precisely because they knew she was a robot and thus would not judge them.

Robots

A robot is a machine—especially one programmable by a computer—capable of carrying out a complex series of actions automatically. Robots can be guided by an external control device, or the control may be embedded within. Robots may be constructed to take on human form, but most robots are machines designed to perform a task with no regard to how they look.

Robots can be autonomous or semi-autonomous and range from humanoids such as Honda’s Advanced Step in Innovative Mobility (ASIMO) and TOSY’s TOSY Ping Pong Playing Robot (TOPIO) to industrial robots, medical operating robots, patient assist robots, dog therapy robots, collectively programmed swarm robots, UAV drones such as the General Atomics MQ-1 Predator, and even microscopic nano robots. By mimicking a lifelike appearance or automating movements, a robot may convey a sense of intelligence or thought of its own. Autonomous things are expected to proliferate in the coming decade, with home robotics and the autonomous car as some of the main drivers.

A new term, robotherapy, describes the different ways that social robots are used to help people. This includes specialized robots for helping children, adults, or the elderly with cognitive, social, or physical problems. The idea of using robots in therapy is to help people by taking over many of the tasks for which they would ordinarily need human assistance. This allows them to be more independent and stay out of total care institutions for as long as possible.

A new article published in Review of General Psychology by Cristina Costescu and Daniel O. David of Romania’s Babes-Bolyai University and Bram Vanderborght of Vrije Universiteit Brussel in Belgium provides an overview of some of the latest advances in robotherapy. Costescu, David and Vanderborght reviewed over a dozen studies conducted over the past twenty years. By comparing these studies, the authors examined how useful social robots were in treatment, and looked at how effective robotherapy has been in helping patients develop better cognitive skills, learn to control their behavior, and cope with emotional problems.

The results of their research suggest robotherapy is effective for specific populations, such as people with autism. Not only can robot-enhanced therapy ease the workload of human therapists, it can also lower the cost of treatment and help patients who have greater difficulty dealing with humans in social settings. Robots can be used in a variety of ways, and their value in providing direct feedback to patients and interacting with them on a regular basis helps improve the overall therapy process.


Using Artificial Intelligence, Chatbots, and Social Bots in Psychological Therapy and Coaching

Brian Scassellati, PhD, a social robotics researcher at Yale University, says: “Socially assistive robots don’t offer physical support, but rather cognitive or social support. Anytime you could use a good personal coach or trainer, we’re starting to see robots involved in that kind of application.”

Over the past few years, virtual help agents have taken on surprisingly sensitive jobs in modern society: counseling Syrian refugees fleeing civil war, creating quiet spaces of contemplation for millions of Chinese living in densely populated cities, and helping Australians access national disability benefits. Bots have offered help, support, and companionship. But there’s one line none of them have yet crossed: actually treating patients.

Butterfly.ai, VoiceVibes, Orai and GiantOtter are among a host of new AI-driven coaching apps that promise to change the way companies provide mentoring and soft-skills training. These apps analyze employee survey data, listen to voice cues and evaluate historical performance reports to identify the unique coaching needs of individual users, as well as offer advice and suggest training to address their shortcomings.

That line has just been crossed, though, with the release of Woebot, a talk-therapy chatbot. Created by a team of Stanford psychologists and AI experts, Woebot uses brief daily chat conversations, mood tracking, curated videos, and word games to help people manage their mental health.

Finding the time and money to pay for talk therapy sessions is out of reach for many, so a chatbot could be a helpful stopgap for psychiatry. But Woebot’s creators believe it has the potential to actually improve on human therapists. “It’s almost borderline illegal to say this in my profession, but there’s a lot of noise in human relationships,” says Alison Darcy, one of the psychologists behind Woebot, and the company’s CEO. “Noise is the fear of being judged. That’s what stigma really is.”

Woebot is obviously not a licensed physician, and it doesn’t make diagnoses or write prescriptions. It’s not equipped to deal with real mental health crises either. When it senses someone is in trouble, it suggests they seek help in the real world and provides text and hotline resources.


But Darcy says her data supports the claim that chatting with Woebot is in fact a therapeutic experience. Recently, Darcy and a team of co-authors at Stanford published a peer-reviewed study in JMIR Mental Health that randomized 70 college students to engage with either Woebot or a self-help e-book for two weeks. The students who used Woebot self-reported a significant reduction in their symptoms of depression and anxiety.

Being the only therapy chatbot with peer-reviewed clinical data to back it up separates Woebot from the pack. But using those results to claim it can significantly reduce depression may expose Woebot to legal liabilities that bots in supporting roles have managed to avoid. Without moral agency, autonomous code can’t be found guilty of any criminal acts. But if it causes harm, it could be subject to civil laws governing product liability. Most manufacturers deal with those risks by putting labels on their products warning of possible hazards; Woebot has an analogous disclaimer that states people shouldn’t use it as a replacement for getting help.

There’s one other big issue with Woebot in its current incarnation: It only talks to you through Facebook Messenger. Facebook’s services aren’t HIPAA-compliant, but in this case that wouldn’t matter anyway. Because Woebot isn’t a licensed medical provider, any conversations with it aren’t protected by medical data privacy and security law in the first place. While Darcy’s team has built a wall on their end to keep all of Woebot’s users anonymous, Facebook knows exactly who you are. And Facebook, not you or Woebot, owns all your conversations.

Of course, promising real medical results from a chatbot introduces new legal and ethical issues. While Woebot might seem like a person, it clearly tells patients that it’s actually closer to a “choose your own adventure self-help book.” Rather than running on machine learning technologies that would allow it to improvise on the fly, Woebot is much more deterministic. As it gathers mood data and processes any texts and emojis that a patient might enter, the bot traces the branches of a decision tree to offer personal responses and follow-ups for 10 minutes. Mostly, it asks questions, such as: “What is your energy like today?” “How are you feeling?” “What’s going on in your world right now?”

Those prompts are modeled on today’s most popular form of talk therapy—cognitive behavioral therapy—which asks people to recast their negative thoughts in a more objective light. Patients are encouraged to talk about their emotional responses to life events, and then stop to identify the psychological traps that cause their stress, anxiety, and depression. “A good Cognitive Behavioral Therapy (CBT) therapist should facilitate someone else’s process, not become a part of it,” says Darcy.
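
To make the decision-tree mechanics described above concrete, here is a minimal sketch in Python; the Node structure, prompts and keyword branches are invented for illustration and are not Woebot’s code.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    prompt: str
    branches: dict = field(default_factory=dict)   # keyword -> next Node
    default: Optional["Node"] = None

# Hand-authored tree: every path is scripted in advance (deterministic),
# unlike a free-form machine-learning model that improvises replies.
checkin = Node("How are you feeling today?")
low = Node("That sounds hard. What thought is bothering you most?")
ok = Node("Great! What's going on in your world right now?")
checkin.branches = {"sad": low, "anxious": low, "good": ok, "fine": ok}
checkin.default = ok

def step(node: Node, answer: str) -> Node:
    """Follow the first branch whose keyword appears in the user's answer."""
    text = answer.lower()
    for keyword, child in node.branches.items():
        if keyword in text:
            return child
    return node.default

node = checkin
print(node.prompt)                    # "How are you feeling today?"
node = step(node, "a bit anxious")
print(node.prompt)                    # scripted follow-up for a low mood
```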


Recognizing the value—both therapeutic and monetary—some mental health care startups are incorporating texting into treatment. One, called Therachat, sells a customizable chatbot that therapists can use to keep their patients engaged. It gives the doctor a full record of the chats, along with an analysis of frequently used positive and negative words.

Ellie: the robot therapist treating soldiers with PTSD

It’s not an unfounded concept. In 2014, Darpa funded a study of a virtual therapist named Ellie, an embodied avatar developed at the University of Southern California’s Institute for Creative Technologies. Ellie is designed to detect signs of depression and post-traumatic stress disorder in patients by tracking and responding to visual and verbal cues. She was a research project, not a commercially available product, but she provided some of the strongest proof that AI can actually make a great therapist. And there’s evidence that removing the “talk” from talk therapy seems to help, too: scientists who recently looked at text chat as a supplement to videoconferencing therapy sessions observed that the texting option actually reduced interpersonal anxiety, allowing patients to more fully disclose and discuss issues shrouded in shame, guilt, and embarrassment.

Co-creator Professor Louis-Philippe Morency hopes Ellie will be useful in helping patients be more truthful in therapy, allowing them to be treated more successfully. “One advantage of using Ellie to gather behavior evidence is that people seem to open up quite easily to Ellie, given that she is a computer and is not designed to judge the person,” Morency explains. “As the participant is talking with Ellie, we analyze the facial expressions, head gestures, eye gaze direction and voice quality to identify behavioral indicators related to depression and post-traumatic stress. These indicators are contextualized by the questions asked by Ellie, such as whether the previous question was intimate or not.” Ellie may be adept at listening and responding, but she doesn’t offer any treatment.

Morency stresses she is not a substitute for a human therapist. Rather, she is used in tandem with a doctor as a data-gatherer, able to break down walls which may exist due to a patient’s unwillingness to disclose sensitive information to a human.

“Ellie’s appearance and gestures were carefully studied”, Morency explains. “We recorded many hours of human clinicians during interviews to identify the key visual gestures. Ellie’s physical appearance was studied as part of a previous project called SimCoach, where many prototypes were compared.”

In this day and age, offering up personal information to a computer may seem fraught with potential privacy issues. Morency stresses this is not a problem, likening her to any other computerized tool used as part of a doctor’s assessment of a patient. “In the same way that the doctor will ask the patient to perform some blood test to better understand a potential illness, Ellie can be used as an information-gathering tool for behavioral indicators”, Morency explains. “In this context, the results of the interaction between Ellie and the patient are only available to the doctor.”


There’s another problem with conventional therapy: Sometimes patients enter therapy but don’t stick with it, or don’t have the motivation to change their behavior or environment. It’s hard to identify who will or won’t follow through, but a team of Penn State engineers is working on ways to use machine learning to come up with customized mental and physical health plans that help patients stay motivated. It’s based on a gaming technique: Users are encouraged to move through virtual environments and perform certain tasks. As they do, scenarios get progressively harder and users have to exert more energy and greater motivation. The patient’s performance results could help researchers measure their personal level of motivation, and tailor mental health treatment accordingly to keep them interested and committed.

One of the most difficult parts of treating people suffering from mental health problems might be trying to identify those who are at the highest risk of self-harm. This is really hard to do. Suicidal thoughts are not usually rooted in a single, isolated incident such as a relationship breakup, job loss, or death of a close friend. This unpredictability is a problem for clinicians, but scientists are looking at how machine learning might be able to help. By examining huge quantities of data and pulling out patterns that humans might miss, robots could help spot potentially suicidal patients.

In one study, an algorithm predicted suicide attempts with surprising accuracy. The research particularly focused on improving clinicians’ predictive abilities about suicide, finding that today’s clinicians are no better able to definitively identify the factors that lead to suicide than mental health specialists of 50 years ago. Machine learning could be the missing link that leads to major advancements in reducing self-harm through prediction and prevention.

All of this said, while robots are already supplementing the work therapists do, they can’t create genuine connections with clients, the kinds of connections needed to really help patients thrive. For now, that’s something only humans can do.

The Growth of Apps for Coaching and Counselling

A flurry of coaching and counselling apps for smartphones has been developed in recent years.

The mobile app Mindbloom, for example, is a social gaming platform that enables users to motivate each other to improve their behavior, reach their life goals and generally be more successful in life. Users can send each other inspirational messages, track and compare their progress, and congratulate each other on achievements such as a pay raise, a new workout milestone, or a new romantic relationship. “The most effective way to succeed in improving one’s life is through social support,” says Chris Hewett, founder of Mindbloom. “To make these social interactions more fun, we designed Mindbloom to feel like a social game.” The Mindbloom platform hence ‘crowdsources’ life coaching services from one’s group of friends, so to speak.

Researchers have also been developing ‘therapy’ programs for mobile phones to help users deal with anxiety and depression. As the New York Times reported, apps like Sosh are designed to help children and young adults improve their social skills. Primarily targeting individuals with Asperger’s Syndrome, the app features exercises that help users manage their behavior, understand feelings and connect with others.

Mobile-based AI systems could even be integrated with devices worn directly on the body – similar to Nike’s FuelBand – measuring our activity levels and our biofeedback information. Based on our data, these systems could work with us to correct bad habits, provide personal development advice, and generally help us improve our lives. In the future it might become common to have one or several virtual companions ‘living’ on our cell phones. “We are at the beginning of a new era of intelligent interfaces,” says Vlad Sejnoha, CTO of Nuance, the speech recognition company that licensed its voice engine to Apple. “Users really resonate with Siri – they clearly have a real emotional connection with human-like conversational device interfaces. We believe that the bulk of mobile devices going forward will be voice-enabled.”


The Downside of Using Chatbots for Mental Health Therapy

John Torous, chair of the American Psychiatric Association’s smartphone app evaluation work group, warned that mental health chatbots present privacy issues. They are not currently covered by the US Health Insurance Portability and Accountability Act (HIPAA), which stops healthcare providers and hospitals from sharing sensitive patient data, he told the Washington Post.

The liability issue is related to a broader concern that the self-help industry – including coaches and self-help tools in general, as opposed to regulated clinicians such as psychiatrists, psychologists and counsellors – is not governed by law or professional regulation.

As digital services get adopted in the healthcare realm, it will become increasingly important to set clear boundaries regarding which information third parties are allowed to access. Connected to the issue of corporate data mining is the danger of data theft. Since virtual psychotherapists and coaches would hold such a wealth of information about us, they would be a profitable target for hackers: if someone could hack into their memory, they would gain far-ranging insight into a user’s life and psyche. If doctors start using therapy apps as part of medical treatment, it will also raise new questions regarding their accreditation. It might become necessary to create new official certifications for virtual therapy services to make sure they meet certain quality requirements.


Another issue is determining what physical form the robot should take. Is a fluffy seal the best choice? A cartoonish dragon? A humanoid with a friendly face? “There’s not one form that’s right for everything we want to do. A robot that helps a child learn social skills will probably look different from one that helps a 40-year-old quit smoking,” Scassellati says.

According to new research by OnePoll on behalf of UK firm SecurEnvoy, 77 percent of those aged between 18 and 24 say they feel anxious if they become separated from their mobile phone. If our devices become ‘alive’ via AI technology, acting as a mediator for self-discovery and self-realization, it might have far-reaching consequences for how we relate to our devices.

Unfortunately, there has been little or no discussion of exactly what these consequences might be. For the caring robots now being developed by the private sector there is no guidance whatsoever on these issues. We can therefore expect, at best, the manipulation of emotions in order to maximize profits; at worst, dangerous mistakes and disreputable deceit.

Even though not many of us are aware of it, we are already witnessing how machines can trigger the reward centres of the human brain. Just look at click-bait headlines and video games. Headlines are often optimized through A/B testing, a rudimentary form of algorithmic content optimization designed to capture our attention. This and other methods are used to make numerous video and mobile games addictive. Tech addiction is the new frontier of human dependency.
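
For readers curious what that A/B optimization looks like in practice, here is a toy sketch in Python; the headline variants, traffic numbers and click probabilities are all invented.

```python
import random

# Serve two headline variants at random, track click-through rate (CTR),
# and keep the winner: the essence of headline A/B testing.
variants = {"A": "You won't believe this robot therapist",
            "B": "Can a chatbot really replace your therapist?"}
shown = {"A": 0, "B": 0}
clicked = {"A": 0, "B": 0}

def serve() -> str:
    """Randomly assign an incoming reader to variant A or B."""
    key = random.choice(list(variants))
    shown[key] += 1
    return key

# Simulate 1,000 readers with made-up click probabilities per variant.
for _ in range(1000):
    key = serve()
    if random.random() < (0.12 if key == "A" else 0.08):
        clicked[key] += 1

winner = max(variants, key=lambda k: clicked[k] / max(shown[k], 1))
print(f"Variant {winner} wins with CTR {clicked[winner] / shown[winner]:.1%}")
```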

Though artificial intelligence is capable of a speed and capacity of processing that’s far beyond that of humans, it cannot always be trusted to be fair and neutral. Google and its parent company Alphabet are one of the leaders when it comes to artificial intelligence, as seen in Google’s Photos service, where AI is used to identify people, objects and scenes. But it can go wrong, such as when a camera missed the mark on racial sensitivity, or when a software used to predict future criminals showed bias against black people.

Right now, these systems are fairly superficial, but they are becoming more complex and life-like. Could we consider a system to be suffering when its reward functions give it negative input? What’s more, so-called genetic algorithms work by creating many instances of a system at once, of which only the most successful “survive” and combine to form the next generation of instances. This happens over many generations and is a way of improving a system. The unsuccessful instances are deleted. At what point might we consider genetic algorithms a form of mass murder?
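
To make that mechanism concrete, here is a minimal genetic-algorithm sketch in Python in which, as described above, only the most successful instances survive and combine while the rest are deleted; the bit-string genome and fitness function are arbitrary stand-ins.

```python
import random

# Bit-string "genomes" and a count-the-ones fitness function stand in for
# any real system being optimized; the numbers are arbitrary.
GENOME_LEN, POP_SIZE, GENERATIONS = 20, 30, 40

def fitness(genome):
    return sum(genome)                       # more 1-bits = "more successful"

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)    # combine two surviving parents
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.02):
    return [bit ^ (random.random() < rate) for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Only the most successful half "survives"; the rest are deleted.
    survivors = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    children = [mutate(crossover(*random.sample(survivors, 2)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print("best fitness:", fitness(max(population, key=fitness)))
```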

Once we consider machines as entities that can perceive, feel and act, it’s not a huge leap to ponder their legal status. Should they be treated like animals of comparable intelligence? Will we consider the suffering of “feeling” machines?

“Robot rights” is the concept that people should have moral obligations towards their machines, similar to human rights or animal rights. It has been suggested that robot rights, such as a right to exist and perform one’s own mission, could be linked to a robot’s duty to serve humans, by analogy with linking human rights to human duties before society. These could include the right to life and liberty, freedom of thought and expression, and equality before the law. The issue has been considered by the Institute for the Future and by the U.K. Department of Trade and Industry. Isaac Asimov considered the issue in the 1950s in his I, Robot. At the insistence of his editor, John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior.

The way technologies are designed can solve problems or create new ones. For example, by making robots look like humans or cute animals, we may develop emotional affinity toward the machines. This could help promote trust with users – but perhaps also overtrust. Could we become co-dependent and overattached to robots, causing a problem when they’re not around?


Summary:

There can be no doubt that AI, in forms such as avatars, chatbots, smartphone apps and robots, is rapidly being developed and used in the areas of coaching and therapy. The question is not if these developments will occur but when, and with them will come a host of issues, from privacy to effectiveness to the impact on jobs.

Copyright: Neither this article nor any portion thereof may be reproduced in any print or media format without the express permission of the author.

Read my latest book: Eye of the Storm: How Mindful Leaders Can Transform Chaotic Workplaces, available in paperback and Kindle on Amazon and Barnes & Noble in the U.S., Canada, Europe, Australia and Asia.