As the United States military moves to integrate generative artificial intelligence (GenAI), the Marine Corps has unveiled a policy aimed at methodically guiding the adoption of this emerging technology. Issued on December 4, instruction NAVMC 5239.1 establishes a framework designed to address the dual perspectives surrounding GenAI — balancing caution against the technology's potential benefits.
One of the primary objectives of the new directive is to instil a sense of responsibility among Marine Corps personnel regarding the use of GenAI tools. The policy explicitly warns users of the dangers associated with these systems, stating, “System users should distrust and verify all outputs prior to use.” This caution addresses the phenomenon often referred to as “hallucination,” in which AI models generate inaccurate or fabricated information.
Despite these warnings, the memo also encourages an exploratory approach. It argues against outright bans on the use of GenAI technologies, suggesting instead that commands should “develop comprehensive governance processes that thoughtfully balance the benefits of GenAI tools and capabilities with potential risks.” By doing so, the Marine Corps aims to ensure that the usage of these tools aligns with broader organisational goals while maintaining operational security.
The policy outlines specific responsibilities for commanders concerning GenAI integration, with an increasing focus on these technologies expected by 2025. Highlights from the four-page memorandum include directives for commands to identify and oversee GenAI developers, system owners, and users, an initiative intended to mitigate the risks of adopting GenAI into workflows. The guidance reflects a growing recognition that commands must know who is using the technology and where the AI systems originate, particularly given the rise of third-party companies repackaging publicly available models, a practice that can introduce security vulnerabilities.
Furthermore, the guidelines mandate that commands employ established risk assessment frameworks before venturing into the use of GenAI systems. These frameworks include the Department of Defense (DoD) Responsible AI Toolkit and the Risk Management Framework from the National Institute of Standards and Technology. The directive emphasises that experimentation with GenAI should not be haphazard; instead, structured planning should ensure adherence to best practices and minimise risks.
Another key component of the new Marine Corps directive is the requirement for tracking and managing AI tools. Commands must document which AI tools are in development and the intended applications for these technologies, ensuring adherence to the five DoD AI Ethical Principles — responsibility, equity, traceability, reliability, and governance.
The establishment of GenAI task forces or cells is also integral to the initiative. These interdisciplinary groups are meant to evaluate existing GenAI technologies and determine their suitability for various operational needs. The memo states, “Commands will establish an AI Task Forces/Cells consisting of various data, knowledge management, AI and digital operations subject matter experts.” Their assessments will ultimately inform a list of preferred GenAI capabilities tailored to the Marine Corps' mission-specific requirements.
Further details regarding the assignment of these task forces and associated policies are anticipated in a forthcoming memorandum. As the military progresses toward the careful integration of generative AI technologies, the newly issued instructions reflect a strategic approach designed to harness the potential of AI while safeguarding against its inherent risks.
Source: Noah Wire Services