The use of artificial intelligence (AI)-based technologies, such as AI-based decision support systems (AI-DSSs), holds significant promise for sustaining and improving the quality and efficiency of long-term care (LTC) for older adults. With the growing prevalence of digital monitoring and care technologies, AI-DSSs can harness the data these technologies generate to proactively support nurses, care coordinators, and other professionals in their decision-making throughout the nursing process. From early identification of care needs to the planning and implementation of personalized care strategies, AI-DSSs offer the potential to elevate the capabilities of caregivers.
However, the deployment of AI-DSSs in LTC also creates ethical and social challenges. The extensive gathering of personal data and the pivotal role of algorithms in interpreting these data to arrive at care-related decisions raise concerns around privacy, autonomy, and potential biases. To responsibly embed these technologies in practice, it is crucial to understand the perspectives of nurses and other professional stakeholders in LTC on the opportunities and risks of AI-assisted decision-making.
This qualitative study explored prerequisites for responsible AI-assisted decision-making in nursing practice from the viewpoints of 24 care professionals in the Netherlands, including nurses, care coordinators, data specialists, and care centralists. Through in-depth interviews, the researchers sought to uncover the nuanced interplay of positive and negative perceptions that shape the stance of these stakeholders toward the use of AI-DSSs in LTC.
Opportunities of AI-Assisted Decision-Making
The care professionals recognized several potential upsides of AI-DSSs in nursing practice. AI-DSSs were seen as enablers of remote and early anticipation of care needs by harnessing data from various digital monitoring technologies. These systems could swiftly uncover overlooked issues or emerging trends related to the health, well-being, or behavior of individual clients. “The data generated by these technologies can provide insights into the changing care needs of specific clients,” shared one nurse. “AI could enable and optimize the use of these increasing amounts of data, enhancing the already implemented forms of remote monitoring.”
Furthermore, AI-DSSs were expected to facilitate adaptive, data-informed decision-making about personalized care strategies. As one care coordinator stated, “AI-DSSs could act as a type of personal coach, mentor, or advisor, offering inspiration or evidence for tailored interventions and helping caregivers evaluate the suitability of certain approaches.” This was seen as particularly valuable for less experienced caregivers or temporary staff who may overlook crucial aspects.
Lastly, the care professionals anticipated that AI-DSSs could alleviate the cognitive load of caregivers and improve their work experience. By automating repetitive, data-intensive tasks, these systems could free up time for more empathetic and nuanced decision-making about person-centered care.
Risks of AI-Assisted Decision-Making
Despite the perceived opportunities, the care professionals also expressed a diverse array of concerns about the risks of AI-DSSs in nursing practice. A prominent worry was the potential over-reliance of caregivers on the outputs of these systems, which could diminish their capacity for independent decision-making and critical thinking. “Caregivers who rely heavily on AI-DSSs may insufficiently consider broader contextual factors or crucial nuances in the characteristics and needs of individual clients,” explained one nurse.
There were also concerns about the privacy implications of extensive data collection, potential misuse of personal information, and the opacity of AI algorithms, which could undermine the trust and confidence of clients and caregivers. As one participant stated, “Shifts toward data- and AI-assisted remote care might not be widely accepted, and questions arise about the extent to which enforcing these changes on hesitant stakeholders can be justified.”
Furthermore, the care professionals cautioned against the risk of AI-DSSs perpetuating or exacerbating biases, leading to the over-problematization and stigmatization of old age. “Certain people and care needs might not be adequately represented in the data and rules that are fed to AI-DSSs, causing these systems to flag numerous issues as potentially problematic,” explained a care coordinator.
Prerequisites for Responsible AI-Assisted Decision-Making
To optimally balance the opportunities and risks of AI-assisted decision-making, the care professionals identified seven interrelated categories of prerequisites for responsible innovation in this area:
Regular deliberation on data collection: Specific data and associated AI-based insights should be generated only in accordance with established goals agreed upon by key stakeholders, including clients. The collection and use of data should be proactively balanced against potential harms, such as privacy infringement and the over-problematization of old age.
Balanced proactive nature of AI-DSSs: While AI-DSSs should ease data-intensive analytical tasks, the automation of decision-making in nursing practice should be avoided. These systems should provide inspiration and evidence for care strategies, but caregivers should retain responsibility for developing person-centered approaches.
Incremental advancements aligned with trust and experience: The operation and use of AI-DSSs should initially avoid excessive complexity or opacity, allowing users to gradually build trust as the systems prove their value in practice. Significant adjustments to algorithms and their underlying logic should be extensively tested before broader deployment.
Customization for all user groups: The design and implementation of AI-DSSs should be tailored to the specific needs and capabilities of clients, nurses, and other caregivers, rather than adopting a one-size-fits-all approach.
Measures to counteract bias and narrow perspectives: Transparency should be provided about the functioning of AI-DSSs, and outputs should be framed as advice rather than compelling information. Contextual information about client characteristics should be incorporated to provide a broader perspective on the relevance of AI-generated insights.
Human-centric learning loops: Caregivers should be involved in both the design of AI-DSSs and their implementation and use in practice, contributing their domain-specific knowledge and assisting in the refinement of these systems based on user feedback.
Routinization of AI-DSS use: Consulting AI-DSSs should become the norm in nursing practice as more evidence emerges about their added value. Caregivers should retain the freedom to deviate from or disregard the outputs of these systems, provided they do so thoughtfully and document their reasoning.
By considering these interconnected prerequisites, various actors, including developers and users of AI-DSSs, can cohesively address the different factors important to the responsible embedding of these technologies in LTC. As one data specialist emphasized, “Responsible AI-assisted decision-making requires an approach that extends beyond merely the design and technical aspects of AI-DSSs. The development and use of these systems should be supported by caregivers capable of adeptly interacting with the technologies.”
The perspectives of nurses and other LTC professionals reveal that the opportunities of AI-assisted decision-making could turn into drawbacks depending on how the design and deployment of AI-DSSs take shape. The responsible use of these systems should therefore be viewed as a balancing act, requiring continuous refinement of the ways in which AI supports the nursing process and interacts with caregivers and other stakeholders.