Oct. 30, 2023

It’s Not Just About the Algorithm: Development of a Joint Medical Artificial Intelligence Capability

By Benjamin P. Donham | Joint Force Quarterly 111


Lieutenant Colonel Benjamin P. Donham, USA, wrote this essay while a student at the U.S. Army War College. It tied for first place in the Strategic Research Paper category of the 2023 Chairman of the Joint Chiefs of Staff Strategic Essay Competition.
Navy Hospital Corpsman Second Class Jeffrey Ortberg, center, with Marine Medium Tiltrotor Squadron 265 (Reinforced), 31st Marine Expeditionary Unit, asks for assistance on simulated casualty during mass casualty exercise aboard amphibious assault ship USS America, Pacific Ocean, June 19, 2023 (U.S. Marine Corps/Christopher R. Lape)

Recent advances in artificial intelligence (AI) have highlighted the sophisticated potential of this technology to drastically improve all aspects of medicine. As the joint force prepares for large-scale combat operations, the number of anticipated casualties will greatly exceed available medical resources. Artificial intelligence has the promise of significantly improving many aspects of combat casualty care, including maximizing the impact of limited medical capabilities. However, because of the military’s unique operating environment, the military health system cannot rely on civilian medicine to develop AI capabilities that will be directly applicable to combat casualty care. Given this, the military health system needs to develop a strategic approach to the generation of a medical AI capability for the joint force.

To accomplish this, the military health system first needs to establish a medical AI cross-functional team, which would set the conditions for future capability development. This cross-functional team would then need to develop a common data dictionary and assist the military health system in its transition into a digital organization that passively collects a large amount of high-quality data. Once this infrastructure is established, the focus should then shift toward developing algorithms to support evacuation platform choices, geographic allocation of medical units, predictive Class VIII resupply, and the critical development of a high-quality mass casualty triage algorithm. Implementation of a dedicated strategy to develop a medical AI capability has the potential to significantly improve combat casualty care and reduce strategic risk to the joint force.

Hypothetical Vignette

It is the year 2028, and a large-scale combat operation has broken out between the United States and a hostile force. A U.S. Marine infantry platoon is maneuvering to attack an enemy objective. Hovering above the Marines are autonomous loitering munitions, also known as armed drones. Using artificial intelligence, these drones rapidly identify the platoon and recognize them as Marines. Then, without a human in the decisionmaking cycle, multiple loitering munitions strike the platoon, critically wounding 40 Marines. The lone uninjured corpsman, far from medical assistance, is overwhelmed by the number of casualties. Whom should the corpsman treat, and in what order? Who can go back to the fight, and who needs to be evacuated? Humans can only effectively manage and retain four to seven items of information in their working memory at a time.1 Given this, it is no surprise that the overwhelmed corpsman is unable to effectively prioritize, triage, treat, and evacuate the wounded. Multiple Marines with potentially survivable wounds succumb to their injuries because of a lack of treatment. The scene is captured on high-definition video by a remaining drone conducting battle damage assessment. As part of the enemy’s psychological operations campaign, the video is rapidly disseminated via social media to the American public with the intent of weakening U.S. strategic resolve.

While this hypothetical vignette is jarring, it reflects capabilities that already exist on today’s battlefield. The recent development of powerful AI tools offers the potential to fight back. It also has the potential to revolutionize battlefield medicine, improve casualty care, and reduce the strategic risk the joint force faces from mass casualties during large-scale combat operations.

Background

The convergence of increased computer processing power, ubiquitous data collection, and increased sophistication of computer science has led to the fourth industrial revolution. The distinguishing feature of this revolution is the merging of physical and digital systems resulting in the creation of intelligent and autonomous systems. A key component of this revolution is artificial intelligence, which is defined as the ability of systems to acquire their own knowledge, as opposed to relying on hard-coded knowledge, by extracting patterns from raw data.2 As a central component of the fourth industrial revolution, AI is already impacting all aspects of society, from art to politics.

The strength of artificial intelligence is that it can process vast amounts of data quickly and accurately. This allows AI to identify patterns too complex for humans to identify, and, in certain domains, AI’s decision quality is surpassing that of humans. For example, in 2016 an AI system called AlphaGo defeated the world champion in the ancient Chinese board game Go.3 Prior to this, many believed that because of the game’s complexity, Go was an unbeatable game for machines.

Artificial intelligence also has the potential to significantly improve health care across many aspects of medicine, including drug development, radiological interpretation, patient monitoring, documentation, and more. Artificial intelligence can already accurately interpret chest radiographs,4 and the potential of AI in image interpretation is so great that some have even speculated that it might end the specialty of radiology, which is responsible for medical image interpretation.5 Also, AI can speed up the drug discovery process by analyzing vast amounts of data to identify potential drug candidates and predict their efficacy.6 Furthermore, AI-powered wearables and remote monitoring systems can track patient vital signs and alert healthcare providers to potential health issues before they become critical.7 Last, AI can automate administrative tasks such as scheduling appointments, managing medical records, and handling insurance claims, freeing up healthcare professionals to focus on patient care.8

While AI has the potential to transform medicine, it also presents some distinct challenges, such as data privacy, algorithmic bias, and the ethical considerations of an algorithm making high-consequence decisions. Even so, with proper regulation and oversight, AI has the potential to improve patient outcomes and health care delivery. Given this potential, it is critical that the military health system establish the infrastructure required to take advantage of this emerging technology.

The major challenge facing the military health system during future conflicts will be how to apply a limited medical capability to an overwhelming number of casualties. Central to optimizing the use of limited medical assets during large-scale combat operations will be greater use of AI. Artificial intelligence could assist with decisionmaking across a range of medical capabilities. While AI is not a panacea, and several barriers remain before this concept can become reality, it has great potential to optimize medical operations at scale.

Operational Environment

Given the high lethality and precision of current munitions, large-scale combat operations are expected to generate far more casualties than the military’s medical system can treat. For example, current estimates predict that an Army corps of 90,000 Soldiers would face about 50,000 casualties over 8 days of fighting.9 For context, a corps has an organic medical brigade with about 350 hospital beds. Even when one considers additional Role I–II capacity (battalion aid stations and medical companies) and potential augmentation with Reserve medical units, it is clear that the demand for medical capability, both beds and providers, will vastly exceed the supply. The central problem facing the military health system is scale; the number of casualties requiring treatment will outstrip the capacity of available medical resources.

This fundamental mismatch will lead to significant strategic risk. If operational units are inundated with casualties they cannot evacuate, there is a higher likelihood these units will no longer be able to perform offensive operations. Additionally, the American public has become conditioned to believe that every injured Servicemember will receive high-quality medical care. The strategic will of the public could be shaken by images of injured Servicemembers dying before receiving medical care. For example, antiwar protests in the United States increased significantly in 1968 after the Tet Offensive. At that point in the Vietnam conflict, the United States had suffered 30,484 fatalities, which, for context, is roughly 20,000 fewer than the approximately 50,000 casualties an Army corps–size element is expected to sustain in just 8 days of large-scale combat operations.10 Taken together, the potential lack of quality battlefield care could lead to significant strategic risk to both the mission and the force.

Air Force medics treat simulated patient during Medic Rodeo at Melrose Air Force Range’s Training Area 3B, New Mexico, August 23, 2023 (U.S. Air Force/Elora J. McCutcheon)

Overview

Artificial intelligence has been around for decades, but in the last 5 years its ability to solve complex problems has improved markedly. Recent increases in computing power, combined with massive growth in data collection, have facilitated this rapid progress. One of AI’s emerging strengths is its ability to decipher large, complex problems that are difficult for humans to solve. For example, Google developed an AI algorithm, trained on 128,000 retinal photographs, that could not only diagnose diabetic retinopathy more accurately than fellowship-trained ophthalmologists11 but also accurately predict the risk of cardiovascular events.12 To the surprise of the Google engineers who designed the algorithm, it could even accurately predict a patient’s sex.13 This highlights AI’s ability to find patterns in data that, because of scale, humans cannot perceive.

Artificial intelligence is a generic term that encompasses many related but distinct techniques. At its most basic level, all AI finds patterns and answers questions: it takes input data, applies an algorithm, and produces output data, often a prediction of the probability of a specific outcome. At one end of the AI spectrum are expert systems designed to mimic the decisions a human expert would make. At the other end are the most sophisticated, powerful, and complex techniques: machine learning and, within it, deep learning built on neural networks. Because of this complexity, deep learning models require massive amounts of data and computing power to develop, far more than manual data entry systems can typically supply.
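To make the contrast concrete, the minimal sketch below (in Python, with notional vital-sign thresholds, feature names, and synthetic data) shows the difference between an expert system, whose knowledge is hard-coded by a human, and a machine learning model, which extracts its own decision rule from labeled examples.

```python
# Minimal sketch contrasting the two ends of the AI spectrum described above.
# The rule, feature names, and data are purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Expert system: knowledge is hard-coded by a human expert as explicit rules.
def expert_system_urgent(heart_rate: float, systolic_bp: float) -> bool:
    return heart_rate > 120 or systolic_bp < 90

# Machine learning: the model acquires its own "rules" by extracting patterns
# from labeled data (here, a tiny synthetic set of vitals and outcomes).
X = np.array([[80, 120], [130, 85], [95, 110], [140, 70], [70, 130], [125, 88]])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = urgent, 0 = not urgent (synthetic labels)
model = LogisticRegression().fit(X, y)

print(expert_system_urgent(130, 85))   # rule-based answer
print(model.predict([[130, 85]]))      # learned answer
```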

For example, many of the most common deep learning models for image classification use the ImageNet database, which contains 14 million labeled images and is 150 gigabytes in size.14 Frequently, deep learning models use transfer learning, in which a model incorporates knowledge gained from training on a different but related task to decrease the data required to learn a new task. Even with transfer learning, however, deep learning algorithms demand gigabytes of data. For context, one of the most famous deep learning models, ChatGPT, was trained using a 570-gigabyte data set.15 In comparison, the entire Joint Trauma System Department of Defense (DOD) Trauma Registry, including 15 years of patient data, is only 0.017 gigabytes.16 This is orders of magnitude less data than even the simplest deep learning algorithm developed using transfer learning would need.
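As an illustration of the transfer learning technique described above, the sketch below (assuming TensorFlow/Keras and the publicly available ImageNet-pretrained MobileNetV2 weights; the three output classes and the commented-out training call are placeholders) reuses features learned on ImageNet and trains only a small new classification head, which is what reduces the data requirement.

```python
# Minimal transfer-learning sketch: reuse ImageNet-pretrained features and
# train only a small new classification head on a much smaller data set.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the knowledge transferred from ImageNet

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),  # 3 hypothetical image classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # requires a small labeled data set
```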

Army Captain Ashley Sarlo, critical care nurse attached to 240th Forward Resuscitative Surgical Detachment, simulates experimental postoperative critical care at Camp Grayling, Michigan, August 12, 2023, during Northern Strike 2023 (U.S. Air National Guard/Jacob Cessna)

Establishing a Cross-Functional Medical AI Team

Because of the unique combination of specialties required to develop AI, DOD should develop a cross-functional team dedicated to facilitating the development of medical AI. The team should be composed of a data scientist, a provider experienced in battlefield medicine, and a computer scientist with expertise in AI. In addition to skills in algorithm development, this team needs expertise in fields relevant to operationalizing the technology and therefore should also include individuals familiar with the joint communications infrastructure.

Artificial intelligence’s potential to improve decisions in casualty care and medical resource allocation is broadly applicable across each Service’s medical department. Although each Service will have unique AI capability needs, establishing a baseline medical AI capability has the potential to significantly benefit the entire joint force. Each Service is struggling with how to distribute limited medical resources most effectively to the overwhelming number of casualties expected during large-scale combat operations. A cross-functional team would help coordinate efforts across the Services and reduce redundancy. The Defense Health Agency is the most appropriate location for such a team, given that it is responsible for executing the Defense Health Program appropriation.

Several organizations, including the Defense Advanced Research Projects Agency; MIT Lincoln Laboratory; U.S. Army Natick Soldier Systems Center; Joint Program Executive Office for Chemical, Biological, Radiological, and Nuclear Defense; and others are working on medical AI capability. However, no single organization is responsible for coordinating and synchronizing this development effort. The establishment of a common medical data infrastructure would have a synergistic effect on the development of medical AI capability and would benefit all organizations working this problem set.

Data Infrastructure and Collection

Developing an effective AI model requires high-quality data. The adage “garbage in, garbage out” highlights that AI development demands trustworthy data specific to the problem of interest. There is simply no substitute.

To generate high-quality data, the joint medical community first needs to develop a common and standardized medical data infrastructure to establish a framework for future medical AI development. Commonly referred to as a data dictionary or schema, this data infrastructure would standardize data collection and allow different data sets to be interoperable. Essentially, such a schema would provide a common language among data sets: a structured description of the data, including its format, meaning, relationships, and other attributes. Such a data dictionary is an essential tool to ensure that data is used consistently and accurately. For example, the terms chest tube, tube thoracostomy, and chest drain are all commonly used to describe the same battlefield procedure. Without a common data dictionary that defines their equivalence, however, interoperability between data sets using different terms for the same procedure would be significantly limited. Establishing a clear and structured understanding of the relationships among different data sets as part of a data schema is critical to the foundation of future effective medical AI development.
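A minimal sketch of what one entry in such a data dictionary might look like follows; the canonical code, field names, and layout are purely illustrative and not an existing DOD standard.

```python
# Minimal sketch of a single data-dictionary entry: several synonymous free-text
# terms map to one canonical code with a defined meaning and format.
from typing import Optional

PROCEDURE_ENTRY = {
    "canonical_code": "PROC-CHEST-DRAIN",
    "meaning": "Placement of a drainage tube into the pleural space",
    "synonyms": {"chest tube", "tube thoracostomy", "chest drain"},
    "format": {"laterality": ["left", "right", "bilateral"], "time": "ISO 8601"},
}

def normalize_procedure(free_text: str) -> Optional[str]:
    """Map a free-text procedure name from any data set to the canonical code."""
    if free_text.strip().lower() in PROCEDURE_ENTRY["synonyms"]:
        return PROCEDURE_ENTRY["canonical_code"]
    return None

print(normalize_procedure("Tube Thoracostomy"))  # -> PROC-CHEST-DRAIN
```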

Once a coordinated data schema is developed, the military health system must transition from industrial-age manual data entry practices to those of the digital age, in which data is passively and continuously collected. Currently, most of the data the military health system collects is entered by hand. For example, if a deployed Servicemember were injured today, his or her clinical information would be recorded by hand on DD Form 1380, “Tactical Combat Casualty Care.” Additional clinical information would be manually entered into the Armed Forces Health Longitudinal Technology Application–Theater (AHLTA-T). Other information, such as medical logistics resupply requirements, hospital bed status, and available units of blood, is entered manually into legacy products such as Excel or PowerPoint.

Moving beyond this antiquated approach will require developing and investing in passive, continuous digital data collection from the bottom up, so that human data entry is no longer required. For example, instead of manually inputting medical supply inventories into the Medical Materiel Mobilization Planning Tool, an AI image-recognition algorithm could use video of Class VIII storage to automatically update quantities of medical supplies on hand in the system of record. Future replacements for DD Form 1380 and AHLTA-T could be designed to passively record heart rate, blood pressure, and oxygen saturation.
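The sketch below illustrates the bottom-up idea in miniature: readings arrive from a hypothetical sensor feed and are written as structured, dictionary-conformant records with no human data entry. The field names and record layout are assumptions for illustration.

```python
# Minimal sketch of passive, bottom-up data capture: readings from a sensor
# stream become structured records without manual entry.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class VitalSignRecord:
    casualty_id: str
    heart_rate: int    # beats per minute
    systolic_bp: int   # mmHg
    spo2: float        # oxygen saturation, 0-1
    recorded_at: str   # ISO 8601, per the common data dictionary

def ingest(sensor_stream, store: list) -> None:
    """Append each passively collected reading to the system of record."""
    for casualty_id, hr, sbp, spo2 in sensor_stream:
        store.append(asdict(VitalSignRecord(
            casualty_id, hr, sbp, spo2,
            datetime.now(timezone.utc).isoformat())))

records: list = []
ingest([("CAS-001", 128, 86, 0.91), ("CAS-002", 74, 122, 0.99)], records)
print(records[0])
```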

A particularly important part of building this data infrastructure is the development of wearable technology. Wearables are small electronic devices that track physiologic parameters such as heart rate, sleep, and movement. Collecting individual physiologic data is critical for understanding Servicemember baselines, which allows for optimization of performance and medical treatment. Currently, DOD is developing the Health Readiness and Performance System (HRAPS) to provide actionable information regarding the operational physiology of troops to their unit leadership. Although this program focuses primarily on human performance and injury prevention, it is based on technology that has the potential to be a powerful facilitator of AI and medical care.

Collecting real-time physiologic data for each individual would establish a physiologic baseline that, in the event of an injury, AI analysis could use to assist in medical care. Collectively, this data could be used to rapidly identify mass casualty events, recognizing, for example, the nature and impact of chemical or biological weapons use. The collective physiologic data could also inform the medical common operating picture by using AI to predict where to optimally place medical resources.
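A minimal sketch of how individual baselines could feed such analysis follows: each Servicemember’s current heart rate is compared with his or her own baseline, and a cluster of simultaneous deviations within a unit flags a possible mass casualty event. The thresholds and readings are illustrative only.

```python
# Minimal sketch: per-individual baseline comparison plus a unit-level check.
import numpy as np

def is_anomalous(baseline_samples, current_value, z_threshold=3.0) -> bool:
    """Flag a reading that deviates sharply from this individual's baseline."""
    mean, std = np.mean(baseline_samples), np.std(baseline_samples)
    return abs(current_value - mean) > z_threshold * max(std, 1e-6)

def possible_mass_casualty(unit_flags, fraction=0.25) -> bool:
    """Many simultaneous individual anomalies suggest a unit-level event."""
    return sum(unit_flags) / len(unit_flags) >= fraction

baselines = {"A": [62, 65, 60, 63], "B": [70, 72, 69, 71], "C": [58, 60, 59, 61]}
current = {"A": 140, "B": 138, "C": 61}  # notional current heart rates
flags = [is_anomalous(baselines[m], current[m]) for m in baselines]
print(flags, possible_mass_casualty(flags))
```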

Establishing this medical data infrastructure will not be easy. Healthcare data privacy concerns, operational security requirements, and communications constraints will pose barriers. But these important constraints can be, and must be, overcome for the military health system’s data infrastructure to take advantage of the full potential of AI. The military health system’s data enterprise needs to be transformed into something like Amazon’s: a system that passively collects large amounts of high-quality data and then uses AI models to provide accurate predictions that facilitate high-quality decisions. This in turn reduces waste and maximizes the impact of limited resources.

Algorithm Development

Once an appropriate data infrastructure is in place, multiple operationally specific AI algorithms could be developed to address the multitude of challenges facing the military health system during large-scale combat operations. These potential future applications of AI are described in Army Futures Command Medical Concept 2028:

AI-enabled MEDCOP [Medical Common Operating Picture] will rapidly receive, organize, analyze, interpret, and display contextually relevant information and generate risk-informed recommendations that comprehensively consider the use of Army and UAP [Unified Action Partners] capabilities. . . .

AI-enabled collaboration, decision-support, and casualty management systems enable medical regulating forward of the division rear boundary and the identification of expected MEDEVAC arrival, area medical capabilities and statuses, and expected arrival of medical resupply.17

While there are many potential beneficial uses of AI to assist with combat casualty care, the priority should be developing a clinical algorithm to assist with point-of-injury mass casualty triage. Given the limited medical assets available during large-scale combat operations, it is imperative that wounded Servicemembers be triaged accurately. This would not only improve critical medical care but also allow Servicemembers to return to duty from the lowest appropriate level of care. This would be a cultural change from the global war on terror, during which even those with minimal injuries were evacuated to limit their risk.

The scale of triage needed after future battles is likely to be so large that human decisionmaking will fall short. AI will speed and sharpen this decisionmaking process, and it will allow medical officers to accurately determine medical evacuation priorities. In certain instances, it may be necessary to adjust the risk acceptance of the algorithm so that commanders conserve as much combat power as possible, even at the cost of accepting higher medical risk. Risk acceptance in combat is not static, and users will need the ability to adjust the risk acceptance of a triage algorithm.
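The sketch below shows the adjustable-risk-acceptance idea in its simplest form: the same model-predicted probability of deterioration yields different triage decisions depending on a commander-set evacuation threshold. The probabilities and thresholds are notional, not validated clinical values.

```python
# Minimal sketch of adjustable risk acceptance in a triage algorithm.
def triage_decision(p_deterioration: float, evac_threshold: float) -> str:
    """Return 'EVACUATE' or 'RETURN TO DUTY' for a given risk-acceptance setting."""
    return "EVACUATE" if p_deterioration >= evac_threshold else "RETURN TO DUTY"

p = 0.30  # model-predicted probability this casualty deteriorates without evacuation
print(triage_decision(p, evac_threshold=0.20))  # conservative: evacuate more
print(triage_decision(p, evac_threshold=0.50))  # aggressive: preserve combat power
```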

We know that clinicians can make accurate triage decisions based on the appearance of a patient and other limited data.18 We also know that AI is particularly effective with image and voice data. Given this, there is a high likelihood that an accurate triage algorithm could be developed using short audio/video recordings combined with vital sign data from wearables. Optimally, the training data could be collected by civilian partners who see a high volume of trauma patients and paired with time-dependent outcome data. This algorithm development could occur concurrently with the development of the medical data infrastructure. One could envision a Servicemember at the point of injury quickly taking audio/video recordings of multiple injured troops. His or her Nett Warrior device (an integrated dismounted leader situational awareness system), equipped with an AI algorithm, could then rapidly indicate who can return to duty, who requires evacuation, and where each casualty should be evacuated.
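As a rough sketch of how such an algorithm might be structured (not the fielded solution), the example below fuses wearable vital signs with simple features that would be extracted from short audio/video clips into one feature vector and trains a gradient-boosted classifier on synthetic data; a real model would require a large, outcome-labeled trauma data set of the kind described above.

```python
# Minimal sketch of a multimodal triage classifier on synthetic placeholder data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 200
# [heart_rate, systolic_bp, spo2, responds_to_voice(0/1), visible_hemorrhage(0/1)]
X = np.column_stack([
    rng.normal(95, 25, n), rng.normal(115, 25, n), rng.uniform(0.80, 1.0, n),
    rng.integers(0, 2, n), rng.integers(0, 2, n)])
# 0 = return to duty, 1 = delayed evacuation, 2 = immediate (synthetic labeling rule)
y = np.where(X[:, 1] < 95, 2, np.where(X[:, 0] > 110, 1, 0))

model = GradientBoostingClassifier().fit(X, y)
casualty = [[130, 88, 0.90, 1, 1]]  # one new casualty's fused features
print(model.predict(casualty), model.predict_proba(casualty))
```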

Army Soldiers from 16th Combat Aviation Brigade, 7th Infantry Division, conduct medical evacuation training during exercise Super Garuda Shield 2023, in Puslatpur, Indonesia, August 30, 2023 (U.S. Army/Wyatt Moore)

Special Considerations With AI Development

The AI community is currently struggling with several issues of particular importance to military health care. These include the transparency and bias of algorithms, the level of AI autonomy appropriate for high-consequence decisions, and who ultimately assumes the risk if AI fails when making such a decision. It is important to keep these in mind while building a future AI infrastructure.

A transparent AI system should be explainable, interpretable, and understandable to human users. However, some of the most advanced AI models, such as ChatGPT and DALL-E, are developed using deep learning neural networks, which are complex and highly interconnected systems that can be difficult to interpret. These deep neural network AIs are considered “black box” AI because they operate in an opaque or “hidden” manner. These systems’ decisionmaking processes are not easily understood or explained by humans. In a black box AI system, input data is fed into the system, and output is generated without any clear understanding of how the system arrived at its decision or conclusion. This lack of transparency can be a major problem in high-consequence medical decisions because it can make it difficult to identify errors or biases. It is therefore critical to invest in developing techniques to make even the most complex AI systems more transparent and interpretable.
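One family of techniques for peering into a black box model is illustrated below: permutation importance estimates how much each input feature drives a trained model’s predictions. The model, features, and data are synthetic placeholders; the point is only that such interpretability checks can accompany any fielded medical algorithm.

```python
# Minimal sketch of one transparency technique: permutation importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))         # [heart_rate_z, systolic_bp_z, noise]
y = (X[:, 1] < -0.5).astype(int)      # outcome actually depends on blood pressure only

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["heart_rate", "systolic_bp", "noise"], result.importances_mean):
    print(f"{name}: {score:.3f}")     # blood pressure should dominate
```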

In addition to transparency, bias in AI requires special consideration. Bias in AI refers to the tendency of algorithms to produce results that systematically and consistently discriminate against certain individuals or groups. The majority of AI systems are trained using a technique called supervised learning, in which the AI learns about a specific topic from data that includes both predictors and responses. If the data used to train AI contains biases or reflects social inequalities, the AI system will learn and perpetuate those biases. For example, AI trained to detect melanoma using imaging data sets that include only light-skinned patients will be biased and less effective at detecting melanoma in individuals with darker skin.19 The military needs to scrutinize the AI it develops to ensure it is not encoding existing bias. The high consequence of medical decisions demands that the military hold its medical AI to a higher standard than AI in other fields.
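A minimal sketch of the kind of bias audit this implies follows: model sensitivity (recall) is computed separately for each patient subgroup before fielding. The groups, predictions, and labels are synthetic placeholders illustrating the check, not real performance data.

```python
# Minimal sketch of a subgroup bias audit on synthetic predictions.
import numpy as np
from sklearn.metrics import recall_score

y_true = np.array([1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0])
y_pred = np.array([1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0])
group = np.array(["light"] * 6 + ["dark"] * 6)  # e.g., skin-tone subgroup per patient

for g in ["light", "dark"]:
    mask = group == g
    print(g, "sensitivity:", recall_score(y_true[mask], y_pred[mask]))
# A large gap between subgroups would indicate encoded bias requiring retraining
# with more representative data.
```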

The use of AI in medicine differs from its use in other fields because of the high consequences of medical decisions. AI systems can be categorized into different levels of autonomy based on the degree to which they operate independently of human input and supervision. The spectrum of autonomy ranges from partial autonomy, with humans still in the loop, to fully autonomous systems that operate without any direct human interaction. It is still an open question how much autonomy to grant AI in high-consequence medical decisions. Many authors believe that for high-consequence decisions, AI should never be fully automated but should instead inform the decisions of humans who remain in control. This form of partial automation is commonly referred to as Centaur AI: half-human, half-machine.

Additionally, the military must decide who owns the risk when AI fails. Like any new technology, AI will not work perfectly at its inception. Given the difficulty of replicating battlefield injury conditions, the military will likely face a steep learning curve when it first uses AI on the battlefield. One can envision a scenario in which an isolated, overwhelmed junior medic performing mass casualty triage defers to a partially automated AI algorithm. If lives were then lost because of an error in the AI system, who would be held responsible: the commander, the Servicemember, the Defense Health Agency, or DOD? These are difficult and complex questions that the military has not fully thought through.

Last, the U.S. Government needs to own these algorithms because AI systems will require rapid adjustment. If the development of medical AI algorithms is contracted out to a nongovernmental company, the military health system will not be able to adjust the technology with the speed and flexibility needed. In general, statement of work changes on government contracts go through a formal, multistep process that can take anywhere from a few weeks to several months or more. Effective AI requires iterative development, and the algorithms require constant updating; the speed required is on the order of hours to days, not weeks to months. Because of this, the algorithms need to be owned by the U.S. Government, even if they are developed by contracted personnel. While this approach might be more time-consuming up front, the speed and flexibility it enables are critical to the overall success of developing a military medical AI capability.

Airman with 96th Medical Group provides aid to simulated victim during scenario for Tactical Combat Casualty Care training, November 17, 2022, at Eglin Air Force Base, Florida (U.S. Air Force/Samuel King, Jr.)

Communications Synchronization

Once medical AI systems have been developed, they still need to be incorporated into the joint communications infrastructure. For instance, in a battlefield mass casualty event, tactical sensors would gather patient data and send it to a medic’s Nett Warrior device. The medic could then use an AI triage algorithm to analyze the data. The cumulative data from the mass casualty event would then be transmitted into the medical common operating picture, where AI could be used to predict which medical evacuation assets are needed, where to position additional medical capacity, and how to predictively push Class VIII resupply. At any point in this communications chain, an otherwise effective AI algorithm could fail simply because data was not transferred. Given this, it is critical that AI infrastructure development be fully nested within the established joint communications infrastructure.

Without seamless integration with the overall joint communications and data infrastructure, AI will not be effective. Additionally, bandwidth will be limited in large-scale combat operations, so it is critical that low-bandwidth data solutions be explored. One solution might be edge computing, a distributed computing paradigm that brings computation and data storage closer to the sources of data; for example, processing might occur on the physiologic sensor itself, such as HRAPS, instead of relying on centralized cloud servers. Edge computing is a powerful tool that could help the joint force operate in contested communications environments, enabling faster, more secure, and more efficient data processing and analysis at the edge of the network. Because of this, edge computing options should be explored and incorporated into the overall medical AI infrastructure.
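The sketch below illustrates the bandwidth argument: triage inference runs at the edge, on the sensor or the Nett Warrior device, and only a compact summary message crosses the contested network instead of raw video or waveform data. The message fields and byte counts are illustrative assumptions.

```python
# Minimal sketch of edge processing: transmit a compact summary, not raw data.
import json

def summarize_at_edge(casualty_id: str, triage_category: str, vitals: dict) -> bytes:
    """Produce a small message suitable for a low-bandwidth, intermittent link."""
    message = {"id": casualty_id, "cat": triage_category,
               "hr": vitals["heart_rate"], "sbp": vitals["systolic_bp"]}
    return json.dumps(message, separators=(",", ":")).encode("utf-8")

raw_video_bytes = 25_000_000  # a short clip processed and discarded at the edge
packet = summarize_at_edge("CAS-001", "IMMEDIATE",
                           {"heart_rate": 128, "systolic_bp": 86})
print(len(packet), "bytes transmitted instead of", raw_video_bytes)
```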

Alternate Ending to Hypothetical Vignette

It is 2028, and the same large-scale combat operation has broken out. This time, however, the military health system has spent the last 5 years aggressively developing AI capabilities, improving its ability to care for wounded Servicemembers. After the initial casualty event, the corpsman releases several small unmanned aerial vehicles. These drones use onboard sensors to identify wounded personnel and collect physiologic, movement, audio, visual, and location data on them. This information is combined with individual physiologic data, including baselines from wearable sensors, and sent to the corpsman’s body armor–mounted Nett Warrior phone. An AI algorithm takes that data and instantly triages the injured Marines, rapidly identifying those who require lifesaving interventions. While the corpsman treats the Servicemembers with the most time-sensitive injuries, the cumulative data is transmitted back to the joint operations center. There, additional AI algorithms determine the type and number of evacuation platforms needed to evacuate the wounded. Other algorithms determine where to reposition available medical assets on the battlefield to respond effectively. Class VIII expenditure is also predicted by AI, and resupply is pushed to units in need before any request is received. Through the effective use of this new technology, limited medical assets are effectively applied to a large number of casualties. This limits battlefield morbidity and mortality and reduces the impact of casualties on maneuver forces, preserving combat power that is critical during large-scale combat operations.

Artificial intelligence has the potential to drastically improve military medicine’s ability to care for combat casualties during large-scale combat operations. To take full advantage of this technology, the military health system needs to take a comprehensive approach to developing its infrastructure. First, a cross-functional team with data scientists, computer scientists, communications experts, and providers with domain expertise in battlefield medicine needs to be established to set the conditions for the development of future medical AI capability. Once this cross-functional team is established, it needs to develop a common data dictionary that will allow standardization of data sets and facilitate consolidation of different data sets. Concurrently, the military health system needs to transition from an analog data organization into a digital one in which high-quality, passively collected medical data is fully incorporated into joint communications systems. Once this underlying data infrastructure is established, it will set the conditions for the development of a wide variety of medical AI capabilities. Although there are many applications in which medical AI could improve battlefield medicine, including decision support for evacuation platform choices, geographic allocation of medical units, and supply of medical equipment and consumables, a special emphasis should be placed on developing a high-quality mass casualty triage algorithm. This algorithm is critically needed during large-scale combat operations to maximize the impact of medical care and to aggressively return Servicemembers to duty at the lowest echelons of care. Implementation of these elements would greatly increase the joint force’s ability to take advantage of the powerful potential of AI, significantly improve combat casualty care, and reduce overall strategic risk. JFQ

Notes

1 G.A. Miller, “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information,” Psychological Review 101, no. 2 (April 1994), 343–352.

2 Ian Goodfellow, Yoshua Bengio, and Aaron Courville, Deep Learning (Cambridge, MA: MIT Press, 2016).

3 David Silver et al., “Mastering the Game of Go with Deep Neural Networks and Tree Search,” Nature 529, no. 7587 (January 28, 2016), 484–489.

4 Paras Lakhani and Baskaran Sundaram, “Deep Learning at Chest Radiography: Automated Classification of Pulmonary Tuberculosis by Using Convolutional Neural Networks,” Radiology 284, no. 2 (August 2017), 574–582.

5 Katie Chockley and Ezekiel Emanuel, “The End of Radiology? Three Threats to the Future Practice of Radiology,” Journal of the American College of Radiology 13, no. 12, pt. A (December 2016), 1415–1420.

6 Nariman Noorbakhsh-Sabet et al., “Artificial Intelligence Transforms the Future of Health Care,” The American Journal of Medicine 132, no. 7 (July 2019), 795–801.

7 Daniele Ravi et al., “Deep Learning for Health Informatics,” IEEE Journal of Biomedical and Health Informatics 21, no. 1 (January 2017), 4–21, https://doi.org/10.1109/JBHI.2016.2636665.

8 Noorbakhsh-Sabet et al., “Artificial Intelligence Transforms the Future of Health Care.”

9 Matthew Fandre, “Medical Changes Needed for Large-Scale Combat Operations,” Military Review, May 2020, 36–45, https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2020/Fandre-Medical-Changes/.

10 “Vietnam War U.S. Military Fatal Casualties Statistics,” National Archives and Records Administration, https://www.archives.gov/research/military/vietnam-war/casualty-statistics.

11 Varun Gulshan et al., “Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs,” JAMA 316, no. 22 (December 13, 2016), 2402–2410, https://doi.org/10.1001/jama.2016.17216.

12 Ryan Poplin et al., “Prediction of Cardiovascular Risk Factors from Retinal Fundus Photographs via Deep Learning,” Nature Biomedical Engineering 2, no. 3 (March 2018), 158–164.

13 Bjorn Kaijun Betzler et al., “Gender Prediction for a Multiethnic Population via Deep Learning Across Different Retinal Fundus Photograph Fields: Retrospective Cross-Sectional Study,” JMIR Medical Informatics 9, no. 8 (August 2021), https://doi.org/10.2196/25165.

14 Jia Deng et al., “ImageNet: A Large-Scale Hierarchical Image Database,” in 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, June 22, 2009, 248–255.

15 Alex Hughes, “ChatGPT: Everything You Need to Know About OpenAI’s GPT-4 Tool,” BBC Science Focus, June 6, 2023, https://www.sciencefocus.com/future-technology/gpt-3/; “GPT-4 Is OpenAI’s Most Advanced System, Producing Safer and More Useful Responses,” OpenAI, https://openai.com/gpt-4.

16 Nicholas D. Drakos, personal communication with author on size of the Joint Trauma System Department of Defense Trauma Registry, n.d.

17 Army Futures Command Pamphlet 71-20-12, Army Futures Command Concept for Medical 2028 (Washington, DC: Headquarters Department of the Army, March 4, 2022), 11.

18 Jeffrey Wiswell, “‘Sick’ or ‘Not-Sick’: Accuracy of System 1 Diagnostic Reasoning for the Prediction of Disposition and Acuity in Patients Presenting to an Academic ED,” American Journal of Emergency Medicine 31 (2013), 1448–1452.

19 Giona Kleinberg et al., “Racial Underrepresentation in Dermatological Datasets Leads to Biased Machine Learning Models and Inequitable Healthcare,” Journal of Biomed Research 3, no. 1 (2022), 42–47.