If you've called the DMV, your cellphone service or cable TV provider, the complaint line of an online vendor, or any other automated, disembodied telephone "presence," you know the drill:
- You are asked a question.
- You respond.
- You hear funny "sorting" noises.
- You are asked another question.
- Rinse and repeat. And repeat. And repeat.
Now imagine doing this when you call your doctor's office or local medical clinic.
Are there going to be problems with AI-assisted conversational agents (CAs) in the health care setting? Oh, yeah. The authors of this paper list "considerations," and it's an impressive list:
- Patient Safety
  - Who monitors the interactions between patients and CAs? Does monitoring occur 24 hours a day, 7 days a week, or on another schedule?
  - Is there a rigorously tested escalation pathway to a human clinician? What scenarios have been configured to initiate the escalation pathway?
  - How well do CAs detect subtleties of language, tone, and context that may signal a risk of patient harm?
- Scope
  - What kinds of clinical tasks should be augmented or automated by CAs, and which should not? How much guidance is appropriate for CAs to provide to patients?
- Trust and Transparency
  - Do clinicians trust CAs? Do patients? Should they?
  - To what degree do clinicians and patients need to understand the workings of CAs to use them effectively and intelligently, and to ensure the appropriate amount of trust?
- Content Decisions
  - What are the content sources for CAs that provide recommendations or guidance?
  - Do the CA recommendations align with content sources and with supervising clinician recommendations?
- Data Use, Privacy, and Integration
  - Who can access exchanges between patients and CAs?
  - Who owns or controls the data?
  - Will the data be stored or purged?
  - If stored, for what purposes (e.g., research, commercial use)?
  - Are conversations integrated into patients' electronic health records (EHRs), or do they remain on each device?
  - Can EHR data be integrated into CAs to better contextualize interactions?
- Bias and Health Equity
  - Which patient groups are used to train algorithms?
  - How representative are they?
  - How do CAs evolve over time to reflect new user populations?
  - How do CAs handle accents and speakers of other languages?
  - What about varying health literacy levels and compliance with the Americans with Disabilities Act?
- Third-Party Involvement
  - CAs should be protected against commercially motivated data sharing or marketing, while permitting references to evidence-based products and therapies.
  - A balance is needed among commercial, technology-leadership, and other incentives for CA developers and health care organizations.
- Cybersecurity
  - What if data, devices, or apps are hacked or monitored covertly and cause harm?
  - Will CA conversation data be encrypted?
  - Are there restrictions on CA access?
  - Is two-factor authentication required?
  - What are the trade-offs between sufficient security and convenient access?
- Legal and Licensing
  - Who is accountable if CAs fail? The sponsoring health care organizations or clinicians? The CA vendors? All of the above?
  - What is the role of insurance in CA services?
  - Will CAs require licenses or credentials similar to those required of clinicians?
- Research and Development Questions
  - What approach or tone works best for patients? Human vs. robotic, empathetic vs. stoic, terse vs. engaging, female vs. male vs. gender-neutral?
  - What are the most common questions or needs posed to CAs?
  - What do patients find most and least useful?
  - What motivates patients to use CAs?
  - What are differential discontinuation rates, and why do some patients stop using CAs?
  - What other functions are requested, are viable, and are needed most?
  - What are patient outcomes with CAs?
- Governance, Testing, and Evaluation
  - How will decisions about CA selection, deployment, and use be governed? How will performance be tested and evaluated with actual patients before deployment?
  - What standard performance metrics and evaluations will be developed and implemented? How will desired outcomes and unanticipated or undesirable outcomes, including biases, be captured and assessed on an ongoing basis? How will these assessments be used to continue, suspend, or modify use of CAs?
  - How will hazards or anomalies be detected and addressed?
- Supporting Innovation
  - How can the development, testing, and introduction of promising boundary-pushing technologies be balanced against the need to protect patients and address the other issues listed here?
This is all pretty new stuff. The oldest source cited by the authors came out in 2014, and only a handful are specifically about CAs in health care. This article is a good starting place for anyone who wants to catch up on what promises to be a fascinating process of innovation.