Algorithms are increasingly orchestrating our daily lives, shaping the content we consume, from news and advertisements to entertainment recommendations, and we need the freedom to make informed decisions about them. In certain cases, algorithms controlled by specific market players already determine medical treatments for patients, affect insurance premiums, and influence the quality of care provided. This phenomenon extends to HCP/pharmacist engagement, influencing care patterns and pharmaceutical sales. To truly achieve the desired omnichannel HCP experience and hyper-personalization, similar to the retail and FMCG sectors, transparency and explainability are essential components of change.
Shifting Focus from Intellectual Capital to Emotional Intelligence
Similar to the transformative impact of the industrial revolution on physical labor, the advent of AI is revolutionizing the realm of intellectual capital. This evolution highlights the growing significance of our capacity to forge meaningful connections, cultivate empathy, and master the art of effective communication through emotional intelligence. Regrettably, in professional settings where knowledge and intellectual capital reign supreme, social and emotional development often fails to receive the attention it deserves.
The number of incidents concerning the misuse of AI is rapidly rising
Life sciences organizations are actively weighing the benefits against the risks. On the benefits side, according to the AI Index, organizations that have adopted AI report realizing meaningful cost decreases and revenue increases.
Using data from the AI, Algorithmic, and Automation Incidents and Controversies (AIAAIC) Repository, a publicly available database, the AI Index reported that the number of incidents concerning the misuse of AI is rising sharply. Of the total reported incidents, 4.75% are health-related, spanning NHS Digital health data sharing opacity, health risk scoring, ovulation prediction, and nutritional labelling algorithms from companies such as Apple, NHS Digital, Koko Health, United Health, IBM, Optum, Flo Health, Google, and OpenAI, as well as governments and non-profits.
To address the concerns related to the rise in incidents, life sciences organizations are incorporating privacy by design, which involves conducting a data protection impact assessment (DPIA) along with a proportionality evaluation. The aim of a DPIA and privacy impact assessment (PIA) is to determine whether data processing is necessary and proportionate, while effectively mitigating and handling the privacy risks associated with any AI project.
Despite the fact that in 2022 the AI focus area with the most investment was medical and healthcare ($6.1 billion), there are inherent risks in relying on language models trained on static data snapshots. These models often lack continual updates and fail to incorporate real-world context, making it challenging to effectively fact-check claims. This poses a potential threat, as they may overlook counterevidence and lack the nuanced judgment that human fact-checkers possess. The role of human supervision of AI models becomes even more important in the context of technology development: humans remain the decision makers.
User Enablement through Core and Advanced Best Practices Drives Maximum Returns for Organizations
According to McKinsey & Company’s research on the differentiators of AI outperformance, the “AI high performers” (players that attribute at least 20% of EBIT to their use of AI) report that 57% of their organization’s users are taught the basics of how the AI models work.
“Prioritize the Human Element” is an essential mantra for organizations that have focused too heavily on data and technology, overlooking the importance of employee engagement and their broader responsibilities to the organization and society. Therefore, the context for artificial intelligence must be clearly explained: context for decisions with knowledge graphs, context for efficiency with graph-accelerated ML, context for accuracy with connected feature extraction, and finally context for credibility with AI explainability.
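To make "connected feature extraction" concrete, here is a minimal sketch, assuming a hypothetical HCP referral network and the open-source networkx and pandas libraries; the entities, edges, and feature choices are illustrative assumptions, not a description of any specific production pipeline.

```python
# Illustrative sketch of connected feature extraction: graph-derived
# features computed over a hypothetical HCP referral network.
# All entity names and edges below are fabricated for illustration.
import networkx as nx
import pandas as pd

# Hypothetical referral relationships between healthcare professionals
referrals = [
    ("hcp_a", "hcp_b"),
    ("hcp_b", "hcp_c"),
    ("hcp_c", "hcp_a"),
    ("hcp_c", "hcp_d"),
]
G = nx.Graph()
G.add_edges_from(referrals)

# Graph-derived ("connected") features that add relational context
degree = nx.degree_centrality(G)
pagerank = nx.pagerank(G)

features = pd.DataFrame({
    "hcp_id": list(G.nodes),
    "degree_centrality": [degree[n] for n in G.nodes],
    "pagerank": [pagerank[n] for n in G.nodes],
})

# These columns would be joined to conventional tabular features
# (e.g., prescription volumes) before model training.
print(features)
```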
Sales leaders often face challenges in effectively prioritizing accounts, adapting messaging to specific account contexts, and adjusting their focus as contexts evolve. The difficulty lies in analyzing vast amounts of constantly changing data to determine clear prioritization and tailor messaging to various types of healthcare professionals and their organizations. This information plays a crucial role in guiding sales representatives on whom to engage and how to approach them.
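As a minimal sketch of what data-driven account prioritization can look like, the example below scores hypothetical accounts with a simple weighted heuristic; the feature names, weights, and account data are assumptions made purely for illustration, not a production scoring model.

```python
# Minimal, illustrative account-prioritization sketch.
# Feature names, weights, and data are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    rx_trend: float            # recent prescription growth, normalized 0-1
    engagement_recency: float  # 1.0 = engaged very recently, 0.0 = long ago
    formulary_access: float    # share of covered patients, 0-1

def priority_score(a: Account) -> float:
    """Weighted blend of signals; weights are assumptions for illustration."""
    return (0.5 * a.rx_trend
            + 0.3 * (1.0 - a.engagement_recency)   # favor accounts not seen recently
            + 0.2 * a.formulary_access)

accounts = [
    Account("Clinic North", rx_trend=0.8, engagement_recency=0.2, formulary_access=0.9),
    Account("Hospital East", rx_trend=0.4, engagement_recency=0.9, formulary_access=0.6),
]

# Rank accounts so representatives see the highest-priority contexts first
for a in sorted(accounts, key=priority_score, reverse=True):
    print(f"{a.name}: {priority_score(a):.2f}")
```

In practice the weights and features would be learned and refreshed as contexts evolve, which is exactly where explainability becomes essential for the field rep.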
Contextualizing AI from the perspective of the Field Rep
Taking a socio-technical perspective, developing human skills is vital in today’s fast-paced world. Continuous training, including understanding AI and using new technologies for information and communication, opens doors to organizational growth, new ways of reaching customers, and overall business expansion.
When adequately developed for prospective applications, computational models have the potential to produce tangible savings of time and money by systematically improving efficiency, whether in the sales representative’s engagement with physicians or pharmacists, or the medical science liaison’s engagement with a specialist.
No industry will be left untouched by AI; therefore, as artificial intelligence informs more decisions, companies’ AI systems must be understood by Field Reps and by those affected by AI use.
Explainable AI can be split across three main categories (a minimal sketch follows the list):
- Explainable data: What data was used to train the model, and why?
- Explainable predictions: What features and weights were used for this particular prediction?
- Explainable algorithms: What are the individual layers and thresholds used to arrive at a prediction?
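The sketch below illustrates the "explainable predictions" category with a simple linear model, where each feature's contribution to one prediction is its weight multiplied by its value; the synthetic data, feature names, and model choice are assumptions for illustration and stand in for whatever model an organization actually deploys.

```python
# Illustrative sketch of "explainable predictions": surfacing the features
# and weights behind a single prediction. Data and feature names are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["visit_frequency", "email_opens", "rx_trend"]
rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)  # synthetic engagement label

model = LogisticRegression().fit(X, y)

# Explain one prediction as per-feature contributions (weight * value)
x = np.array([0.9, 0.2, 0.7])
contributions = model.coef_[0] * x
for name, c in zip(feature_names, contributions):
    print(f"{name}: {c:+.3f}")
print("intercept:", model.intercept_[0])
print("predicted probability:", model.predict_proba(x.reshape(1, -1))[0, 1])
```

For non-linear models, the same question is typically answered with attribution methods such as SHAP, but the principle is identical: show the Field Rep which signals drove a given recommendation.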
Unlocking Ongoing Impact: How Life Sciences Organizations Embrace Agile AI Strategies for Growth
To ensure ongoing impact, life sciences organizations must align key AI-enabled use cases and organize delivery efficiently by identifying core requirements, potential synergies, and gaps in cross-cutting roadmaps. Given the speed of development in the technology industry, the traditionalist pharma mindset must shift from long-term cycles to a focus on quarterly value releases (QVRs), delivering measurable value after each quarterly sprint (e.g., AI-enablement of a scientific process).
By continuously reprioritizing based on organizational needs and enabling efficient development of AI use cases through prioritized data ingestion and team capacity, life sciences organizations will be able to deploy mission-critical assets as needed. See the McKinsey case study on AI in biopharma research for further details.
Technology is but a tool
In conclusion, contextualization is the paramount endeavor when integrating a new technology into diverse ecosystems. The daunting challenge lies in defining the suitable context for a model to deliver dependable answers. The approach to engagement and AI usage changes significantly from a clinical perspective to a commercial model; however, the objective must remain aligned to understandability and ease of use. In the end, technology is but a tool.