Archive for the ‘AI / ML’ Category

AI in Battle Management: A Collaborative Effort Across Borders

Thursday, January 8th, 2026

The 2025 series of the Decision Advantage Sprint for Human-Machine Teaming marked a significant step forward in the integration of artificial intelligence and machine learning into battle management operations. Through a series of groundbreaking experiments, including the recent DASH 3 iteration, the U.S. Air Force, alongside its coalition partners, Canada and the United Kingdom, tested and refined AI’s potential to enhance decision-making, improve operational efficiency, and strengthen interoperability in the face of growing global security challenges.

Held at the Shadow Operations Center-Nellis’ unclassified facility in downtown Las Vegas, DASH 3 set the stage for this collaboration, led by the Advanced Battle Management System (ABMS) Cross-Functional Team. The experiment was executed in partnership with the Air Force Research Lab’s 711th Human Performance Wing, U.S. Space Force, and the 805th Combat Training Squadron, also known as the ShOC-N, further solidifying the commitment to advancing battle management capabilities for the future.

AI Integration into Operational Decision-Making

In the third iteration of the DASH series, seven teams, six from industry and one from the ShOC-N innovation team, partnered with U.S., Canadian, and U.K. operators to test a range of decision advantage tools aimed at rapidly and effectively generating battle courses of action (COAs) with multiple paths. The goal of a Battle COA is to map sequences of actions that align with the commander’s intent while overcoming the complexities of modern warfare, including the fog and friction of battle. Examples of Battle COAs include recommended solutions for long-range kill chains, electromagnetic battle management problems, space and cyber challenges, or agile combat employment tasks such as re-basing aircraft.

U.S. Air Force Col. John Ohlund, the ABMS Cross-Functional Team lead overseeing capability development, explained the importance of flexibility in COA generation: “For example, a bomber may be able to attack from multiple avenues of approach, each presenting unique risks and requiring different supporting assets such as cyber, ISR [intelligence, surveillance, and reconnaissance], refueling, and air defense suppression. Machines can generate multiple paths, supporting assets, compounding uncertainties, timing, and more. Machines provide a rich solution space where many COAs are explored, but only some are executed, ensuring options remain open as the situation develops.”

This ability to explore multiple COAs simultaneously allows for faster adaptation to unforeseen challenges and provides operators with diverse strategies to act upon as the situation unfolds. AI’s integration into this process aims to not only speed up the decision-making cycle but also increase the quality of the solutions generated.
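The mechanics are easier to see in miniature. The sketch below is purely illustrative (every leg name, risk figure, and weight is hypothetical, not drawn from DASH): it enumerates candidate paths through alternative mission phases, scores each on blended risk and time, and keeps the top few so that, as Ohlund describes, many COAs are explored but only some are executed.

```python
# Toy COA generator: illustrative only; all names and numbers hypothetical.
import math
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Leg:
    name: str
    risk: float     # probability of a mission-degrading event, 0..1
    hours: float    # estimated duration
    support: tuple  # supporting assets this leg requires

PHASES = [  # hypothetical alternatives for each mission phase
    [Leg("ingress_north", 0.30, 2.0, ("refueling",)),
     Leg("ingress_south", 0.50, 1.5, ("SEAD", "refueling"))],
    [Leg("strike_standoff", 0.20, 1.0, ("ISR",)),
     Leg("strike_direct", 0.60, 0.5, ("SEAD", "cyber"))],
    [Leg("egress_high", 0.40, 1.5, ()),
     Leg("egress_low", 0.30, 2.5, ("ISR",))],
]

def score(path, w_risk=0.7, w_time=0.3):
    """Lower is better: blend cumulative risk with time (normalized to 8 h)."""
    p_fail = 1 - math.prod(1 - leg.risk for leg in path)
    total_hours = sum(leg.hours for leg in path)
    return w_risk * p_fail + w_time * total_hours / 8.0

def generate_coas(top_k=3):
    """Enumerate every path, score it, and keep the best few so
    options stay open as the situation develops."""
    return sorted(product(*PHASES), key=score)[:top_k]

for coa in generate_coas():
    assets = sorted({a for leg in coa for a in leg.support})
    print([leg.name for leg in coa], f"score={score(coa):.2f}", assets)
```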

AI Speeds Decision Advantage

The speed at which AI systems can generate actionable recommendations is proving to be a game-changer in the decision-making process. Transitioning from the manual creation of COAs that once took minutes or tens of minutes to producing viable options in just tens of seconds was identified as a radical advantage in combat scenarios. Initial results from the DASH 3 experiment show the power of AI in enabling faster, more efficient decision-making.

“AI systems demonstrated the ability to generate multi-domain COAs considering risk, fuel, time constraints, force packaging, and geospatial routing in under one minute,” said Ohlund. “These machine-generated recommendations were up to 90% faster than traditional methods, with the best machine-generated solutions showing 97% viability and tactical validity.”

For comparison, human performance in generating courses of action typically took around 19 minutes, with only 48% of the options being considered viable and tactically valid.
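A rough arithmetic check makes the comparison concrete, using only the figures quoted above:

```python
# Rough arithmetic on the reported figures; no new data, just a check.
t_human_min = 19.0              # typical human COA generation time (reported)
v_human, v_machine = 0.48, 0.97 # reported viability rates

t_machine_min = t_human_min * (1 - 0.90)  # "up to 90% faster"
print(f"90% faster than 19 min is ~{t_machine_min:.1f} min per COA")
print(f"viability improvement: {v_machine - v_human:+.0%}")
# -> ~1.9 min; the best runs reported were faster still (under a minute).
```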

“This dramatic reduction in time and improvement in the quality of solutions underscores AI’s potential to significantly enhance the speed and accuracy of the decision-making process, while still allowing humans to make the final decisions on the battlefield,” Ohlund added.

The ability to quickly generate multiple viable COAs not only improves the speed of decision-making but also gives commanders more options to work within a compressed time frame, making AI an essential tool for maintaining a strategic advantage in fast-paced combat situations.

Building Trust in AI: From Skepticism to Confidence

Skepticism surrounding the integration of AI in operational decision-making was common at the start of the DASH 3 experiment. However, participating operators saw a notable shift in their perspectives as the experiment progressed. U.S. Air Force 1st Lt. Ashley Nguyen, a DASH 3 participant from the 964th Airborne Air Control Squadron, expressed initial doubt about the role AI could play in such a complex process. “I was skeptical about technology being integrated into decision-making, given how difficult and nuanced battle COA building can be,” said Nguyen. “But working with the tools, I saw how user-friendly and timesaving they could be. The AI didn’t replace us; it gave us a solid starting point to build from.”

As the experiment unfolded, trust in AI steadily increased. Operators, gaining more hands-on experience, began to see the value in the AI’s ability to generate viable solutions at an unprecedented speed. “Some of the AI-generated outputs were about 80% solutions,” said Nguyen. “They weren’t perfect, but they were a good foundation. This increased my trust in the system; AI became a helpful tool in generating a starting point for decision-making.”

Trust and Collaboration Across Nations

The collaboration between the U.S. and its coalition partners was highlighted throughout the 2025 DASH series. The inclusion of operators from the U.K. and Canada brought invaluable perspectives, ensuring that the decision support tools tested could address a broad range of operational requirements.

“We understand that the next conflict cannot be won alone; it will take the help of machine teammates and the support of our allies,” said Royal Canadian Air Force Capt. Dennis Williams, an RCAF DASH 3 participant. “DASH 3 demonstrated the value of these partnerships as we worked together in a coalition-led, simulated combat scenario. The tools we tested are vital for maintaining a decision advantage, and we look forward to expanding this collaboration in future DASH events.”

This integration of human-machine teaming and coalition participation highlighted the potential for improving multinational interoperability in the command-and-control battlespace. “The involvement of our coalition partners was crucial, not just for the success of DASH 3 but also for reinforcing the alliances that underpin global security. DASH experimentation is intentionally a low barrier for entry from a security classification standpoint, enabling broad participation from allies and coalition partners alike,” said U.S. Air Force Lt. Col. Shawn Finney, commander of the 805th Combat Training Squadron/ShOC-N.

Addressing Challenges: Weather and AI Hallucinations

The DASH 3 experiment was not just a test of new AI tools, but a continuation of a concerted effort to tackle persistent challenges, including the integration of weather data and the potential for AI “hallucinations.” These issues have been focus areas throughout the DASH series, with each iteration bringing new insights and refinements to ensure AI systems are operationally effective.

Weather-related challenges are a critical factor in real-world operations, but due to simulation limitations, they were not fully integrated into the DASH series. Instead, weather effects were simulated manually by human operators through ‘white carding’, a method that injects scenario-based weather events, such as airfield closures or delays, into the experiment.

“We didn’t overlook the role of weather,” explained Ohlund. “While it wasn’t a primary focus of this experiment, we fully understand its operational impact and are committed to integrating weather data into future decision-making models.”

The risk of AI hallucinations (instances where AI produces incorrect or irrelevant outputs, particularly when using large language models) was another challenge tackled during the DASH 3 experiment. Aware of this potential issue, the development teams took proactive steps to design AI tools that minimized the risk of hallucinations, and organizers diligently monitored the outputs throughout the experiment.

“Our team didn’t observe hallucinations during the experiment, underscoring the effectiveness of the AI systems employed during the experiment,” said Ohlund. “While this is a positive outcome, we remain vigilant about the potential risks, particularly when utilizing LLMs that may not be trained on military-specific jargon and acronyms. We are actively refining our systems to mitigate these risks and ensure AI outputs are reliable and relevant.”
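The article does not detail how the teams engineered out hallucinations, but a common guardrail pattern, shown as a hypothetical sketch below, is to validate model output against a fixed schema and an allowlist of known entities, routing anything unrecognized to human review:

```python
# Minimal guardrail sketch: an assumption on our part, not the teams'
# actual mitigation. Validate output fields and flag unknown entities.
KNOWN_ASSETS = {"ISR", "SEAD", "refueling", "cyber"}   # hypothetical allowlist
REQUIRED_FIELDS = {"objective", "assets", "timeline_min"}

def validate_coa(coa: dict) -> list[str]:
    """Return a list of problems; an empty list means the COA passes."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - coa.keys()]
    for asset in coa.get("assets", []):
        if asset not in KNOWN_ASSETS:
            problems.append(f"unknown asset (possible hallucination): {asset}")
    if not isinstance(coa.get("timeline_min"), (int, float)):
        problems.append("timeline_min must be numeric")
    return problems

print(validate_coa({"objective": "strike", "assets": ["ISR", "warp drive"],
                    "timeline_min": 45}))
# -> ['unknown asset (possible hallucination): warp drive']
```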

Looking Ahead: Building Trust in AI for Future Operations

As the U.S. Air Force moves forward with the 2026 series of DASH experiments, the lessons learned from the 2025 iterations will serve as a crucial foundation for future efforts. The growing trust in human-machine collaboration, the strengthening of international partnerships, and the continuous refinement of AI tools all point to a future where AI plays an integral role in operational decision-making.

“The 2025 DASH series has established a strong foundation for future experiments, with the potential to further expand AI’s role in battle management,” said Ohlund. “By continuing to build trust with operators, improve AI systems, and foster international cooperation, the U.S. and its allies are taking critical steps toward ensuring they are prepared to address the evolving challenges of modern warfare.”

“This is just the beginning,” said Williams. “The more we can integrate AI into the decision-making process, the more time we can free up to focus on the human aspects of warfare. These tools are key to staying ahead of our adversaries and maintaining peace and stability on a global scale.”

By Deb Henley, 505th Command and Control Wing Public Affairs

Tiberius Aerospace’s GRAIL Assessed “Awardable” for Department of War Work in the CDAO’s Tradewinds Solutions Marketplace

Monday, January 5th, 2026

Tiberius Aerospace, a modern defence technology company built to empower the UK, US and their global allies and partners with next-generation weapon systems and AI-powered solutions, has achieved “Awardable” status for its GRAIL (Generative Real-Time Artificial Intelligence for Lethality) platform through the Chief Digital and Artificial Intelligence Office’s (CDAO) Tradewinds Solutions Marketplace.

The Tradewinds Solutions Marketplace is the premier offering of Tradewinds, the Department of War’s suite of tools and services designed to accelerate the procurement and adoption of AI/ML, data, and analytics capabilities. The Solutions Marketplace Model is fully compliant with the SECWAR Memo entitled “Directing Software Acquisition to Maximize Lethality” (March 6, 2025) and the Executive Order entitled “Modernizing Defense Acquisitions and Spurring Innovation in the Defense Industrial Base” (April 9, 2025).

Tiberius GRAIL is an integrated AI platform designed to transform how defense capabilities are evaluated, acquired, and fielded. The platform includes AI-powered weapon system analysis delivering Cost-Efficient Lethality Scores (CELS) in seconds rather than months; a defense marketplace reducing acquisition timelines from 12+ years to under 2 years; and secure coalition collaboration tools with automated export control enforcement.
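Tiberius does not publish how a CELS is computed; purely as a hypothetical illustration, a cost-normalized effectiveness index might look like this:

```python
# Hypothetical illustration only: NOT Tiberius Aerospace's actual formula.
def cels(p_kill: float, effect_radius_m: float, unit_cost_usd: float) -> float:
    """Toy 'cost-efficient lethality' index: effectiveness per dollar,
    scaled for readability."""
    effectiveness = p_kill * effect_radius_m
    return 1e6 * effectiveness / unit_cost_usd

# Example: an invented weapon with 80% kill probability, 50 m effect
# radius, and a $120k unit cost.
print(f"{cels(p_kill=0.8, effect_radius_m=50, unit_cost_usd=120_000):.1f}")
```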

Tiberius Aerospace’s video, “GRAIL: The Operating System for Coalition Defense,” is accessible to government customers on the Tradewinds Solutions Marketplace, and demonstrates how GRAIL enables rapid capability evaluation, transparent supplier discovery, and coalition-wide collaboration.

Tiberius Aerospace was recognized among a competitive field of applicants to the Tradewinds Solutions Marketplace whose solutions demonstrated innovation, scalability and potential impact on DoW missions. Government customers interested in viewing the video solution can create a Tradewinds Solutions Marketplace account at tradewindAI.com.

Blythe Crawford CBE, Director of GRAIL, said, “Having served with 1st Infantry Division in Baghdad, commanded 121 Expeditionary Air Wing, been intimately involved in the rapid development of urgent operational capability in the Pentagon, and served as Commandant of the Air and Space Warfare Centre, it is clear to me that to maintain a battle-winning edge, defense acquisition must undergo a wholesale transformation to deliver new, innovative capability to the warfighter quicker. GRAIL has been built to accelerate that shift: from the analogue to the digital, from bureaucratic waterfall to Silicon Valley-modelled agile. It is not just our tech which has to change in this way, but the means by which we deliver it.” He added, “This recognition from CDAO validates our approach: replacing subjective, years-long procurement processes with objective, AI-powered analysis that gets capability to the warfighter faster. The GRAIL Alliance creates the necessary ecosystem to facilitate this change, and we now have over 100 key defense contractors, OEMs and primes signed up to participate.”

Army Establishes New AI, Machine Learning Career Path for Officers

Sunday, January 4th, 2026

WASHINGTON – The U.S. Army has established a new career pathway for officers to specialize in artificial intelligence and machine learning (AI/ML), formally designating the 49B AI/ML Officer as an official area of concentration. It advances the Army’s ongoing transformation into a data-centric and AI-enabled force.

Full implementation of the new career field will be phased. The first selection of officers will occur through the Army’s Volunteer Transfer Incentive Program (VTIP) beginning January 2026. The officers will be reclassified by the end of fiscal year 2026.

“This is a deliberate and crucial step in keeping pace with present and future operational requirements,” said Lt. Col. Orlandon Howard, U.S. Army spokesperson. “We’re building a dedicated cadre of in-house experts who will be at the forefront of integrating AI and machine learning across our warfighting functions.”

Initially, the 49B AOC will be open to all officers eligible for the VTIP. Those with advanced academic and technical backgrounds in fields related to AI/ML will be particularly competitive candidates. The Army is also exploring expanding this specialized field to include warrant officers in the future.

Officers selected for the 49B AOC will undergo rigorous graduate-level training and gain hands-on experience in building, deploying, and maintaining the Army’s cutting-edge AI-enabled systems. Their primary role will be to operationalize these advanced capabilities across the range of military operations.

The strategic purpose of this new AOC is to provide the Army with a core group of uniformed experts who can accelerate the integration of AI and machine learning. These specialists will apply their talents to a wide range of applications, including:

  • Accelerating battlefield decision-making: Enabling commanders to make faster, more informed decisions in complex environments.
  • Streamlining logistics: Optimizing supply chain and maintenance operations.
  • Supporting robotics and autonomous systems: Fielding and managing the next generation of battlefield robotics.

“Establishing the 49B AI/ML career path is another key investment to maintain our decisive edge as an Army,” said Howard. “Ultimately, it’s about building a force that can outthink, outpace, and outmaneuver any adversary.”

By U.S. Army Communication and Outreach Office

Autonomy in Action: Advancing CBRN Defense Capabilities with Unmanned Systems

Saturday, January 3rd, 2026

Our Nation’s warfighters encounter many known and unknown hazards on the modern battlefield, including chemical, biological, radiological, and nuclear (CBRN) threats. Hand-held detection and identification capabilities enhance situational awareness and enable early warning and mitigation, but they can also be time intensive and physiologically burdensome. Additionally, some environments pose too great a risk or are simply inaccessible to warfighters. This is where critical, integrated, layered CBRN defense assets like autonomous systems come in.

In CBRN defense, an autonomous system refers to a capability that can independently detect, identify, and/or mitigate CBRN threats by leveraging sensors, robotics, artificial intelligence (AI), and automated decision-making algorithms. The key feature lies in its ability to function independently, acting as an intelligent partner and keeping the warfighter at a safe distance, thereby enhancing force protection.
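As a sketch of that detect-and-report loop (an illustrative assumption, not a fielded design), an autonomous platform might require sustained sensor readings above a threshold before declaring a hazard, so a single noisy sample does not trigger a false alarm:

```python
# Illustrative hazard-detection loop; thresholds and values are hypothetical.
from collections import deque

THRESHOLD = 0.7    # normalized sensor response needed to count as a hit
PERSISTENCE = 3    # consecutive hits required to declare a hazard

def hazard_monitor(readings):
    """Yield an alert only after PERSISTENCE consecutive above-threshold hits."""
    recent = deque(maxlen=PERSISTENCE)
    for t, value in enumerate(readings):
        recent.append(value > THRESHOLD)
        if len(recent) == PERSISTENCE and all(recent):
            yield {"t": t, "event": "CBRN_HAZARD", "level": value}

simulated = [0.1, 0.4, 0.8, 0.9, 0.85, 0.2]   # simulated sensor samples
for alert in hazard_monitor(simulated):
    print(alert)   # in practice, pushed to the common operating picture
```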

Currently, the Capability Program Executive Chemical, Biological, Radiological and Nuclear Defense (CPE CBRND) manages autonomous system efforts including the CBRN Sensor Integration on Robotic Platforms (CSIRP) and the Autonomous Decontamination System (ADS).

CSIRP is a rapid prototyping and fielding effort led by the CPE CBRND’s Joint Project Manager for CBRN Sensors (JPM CBRN Sensors) that focuses on integrating modular CBRN sensor solutions to enhance Unmanned Aircraft Systems (UAS) and Unmanned Ground Vehicles. It exploits advances in sensing, AI, machine learning, autonomy, and communications to enable timely and accurate detection, early warning, and reporting of CBRN hazards, benefiting the warfighter by reducing response times and limiting risk of exposure to CBRN threats.

The CSIRP SkyRaider UAS CBRN Hazard Mapping system is an example of CSIRP in action. The CSIRP SkyRaider UAS is a drone with modular detection equipment or sensors attached that can display CBRN hazard information on mapping, targeting, and communication devices. Once launched from the ground or platform, it is capable of autonomous operation beyond line-of-sight and can complete the programmed mission even through loss of GPS or communications. It is capable of self-navigating to the target, maneuvering in tight spaces, and avoiding obstacles.
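The degraded-mode behavior described above can be sketched as a simple store-and-forward pattern (illustrative only, not the SkyRaider’s actual flight software): fall back to non-GPS navigation when needed, and cache reports until communications return:

```python
# Hypothetical sketch of mission continuity through GPS and comms loss.
def navigate(waypoints, gps_ok, comms_ok):
    cached_reports = []
    for wp, gps, comms in zip(waypoints, gps_ok, comms_ok):
        nav = "GPS" if gps else "inertial/visual"   # fall back, keep flying
        report = f"reached {wp} via {nav}"
        if comms:
            for r in cached_reports:                # flush stored reports
                print("TX (delayed):", r)
            cached_reports.clear()
            print("TX:", report)
        else:
            cached_reports.append(report)           # store, forward later
    return cached_reports

navigate(["WP1", "WP2", "WP3"], gps_ok=[True, False, True],
         comms_ok=[True, False, True])
```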

Likewise, the ADS program, led by the CPE CBRND’s Joint Project Manager for CBRN Protection (JPM CBRN Protection), will provide increased safety and efficiency in chemical and biological (CB) decontamination operations by utilizing automated, semi-autonomous, and/or autonomous processes to mitigate contamination on critical mission equipment, infrastructure, and terrain. ADS reduces reliance on warfighters’ manual labor and optimizes resource consumption.

To illustrate how these autonomous systems benefit the warfighter and Joint Force mission, imagine a platoon situated in a contested environment. The adversary launches a missile armed with a chemical warfare agent nearby and the dispersal pattern is unpredictable due to the terrain, wind conditions, and the missile’s detonation characteristics. Manned detection slows contamination mapping and poses risk to the Force, so rather than putting warfighters at risk, the platoon leader would deploy the SkyRaider UAS equipped with chemical sensors to quickly self-navigate and assess the broader area. This unmanned, rapid assessment minimizes personnel exposure and enhances force protection by communicating to leaders the timely information needed to make informed decisions. In this case, the platoon leader might deploy an ADS to decontaminate any equipment or areas the platoon will need to traverse, mitigating the risk of exposure to the warfighters through robotic means and reducing the time and logistical burden required to conduct decontamination operations.

Mark Colgan, CSIRP lead systems engineer for JPM CBRN Sensors, states, “Currently, warfighters have to suit up, do their mission, and then decontaminate their protective gear, equipment, vehicles, and more. We can now skip some of those steps by automating the process. They get the same results while remaining safe and completing the mission faster.”

The CSIRP effort is in constant pursuit of advanced sensing capabilities and improvements to leverage autonomy, specifically through its use of algorithms. To keep pace with advancing technologies, JPM CBRN Sensors and JPM CBRN Protection leverage CPE CBRND’s Joint Enterprise Technology Tool (JETT), a web-based platform designed to facilitate communication between the U.S. Government and industry members, for market research and to gain a better understanding of what industry is developing and their focus areas as they relate to program needs. The JPM CBRN Sensors team has utilized JETT to identify and engage with more than a dozen vendors with capabilities relevant to CSIRP. Colgan states, “JETT has proven valuable in answering the questions of ‘What else is out there?’ and ‘What’s coming next?’”

This aligns with the Department of War’s Acquisition Transformation Strategy, which, in part, acknowledges that industry often outpaces the Defense Industrial Base and that the Department “must adopt an industry-driven environment for companies to share their product and service offerings to accelerate and scale capability delivery,” as well as “enable industry to better understand the Department’s needs and demonstrate mature products and services early in the acquisition process.”

To date, improvements have included software designed to operate with CPE CBRND’s CBRN Support to Command and Control (CSC2), which integrates CBRN sensor data and information into a common operating picture and provides actionable information to Commanders throughout the battlespace; flight software and sensor-driven algorithms that enable a number of unmanned systems to autonomously team up and relay messages among themselves and with their human counterparts; algorithms that synthesize data; and more.
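The relay behavior mentioned above can be pictured as multi-hop message passing; the toy sketch below (node names and topology are hypothetical) floods outward from a detecting node until the message reaches any teammate with a link back to human operators:

```python
# Illustrative multi-hop relay among unmanned nodes; not fielded software.
def relay(links, start, message, has_reachback):
    """Breadth-first search from `start`; return the delivery path or None."""
    frontier, seen = [[start]], {start}
    while frontier:
        path = frontier.pop(0)
        node = path[-1]
        if has_reachback[node]:        # this node can reach human operators
            return path, message
        for nxt in links.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

links = {"uas1": ["uas2"], "uas2": ["uas3", "ugv1"], "uas3": [], "ugv1": []}
print(relay(links, "uas1", "CBRN detection at WP2",
            {"uas1": False, "uas2": False, "uas3": False, "ugv1": True}))
# -> (['uas1', 'uas2', 'ugv1'], 'CBRN detection at WP2')
```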

As it stands, autonomous systems provide a decisive warfighter advantage by performing standoff detection of CBRN threats and critical decontamination functions so the warfighter can focus—at a safe distance—on the larger mission at hand. Looking ahead, AI and technology advancements will continue to optimize the role autonomous systems play in CBRN defense, enabling our warfighters to operate in a CBRN contested environment with more confidence.

By Vashelle Nino, CPE CBRND Public Affairs

Army Teams with Industry to Refine AI Potential Supporting Command and Control

Wednesday, December 17th, 2025

ABERDEEN PROVING GROUND, Md. — There are no algorithms in foxholes – yet.

While the U.S. Army has applied emerging artificial intelligence tools to streamline processes across the enterprise — most recently with the rollout of the Department of War’s new generative AI website, GenAI.mil — the impact of AI on the tactical edge Soldier and commander is still taking shape.

With the help of industry experts and Soldier experimentation, however, the Army is building a blueprint for algorithmic warfare at the edge across technology, training, concepts, procurement, and ethical implementation. The potential of AI supporting command and control, C2 — using tools to rapidly process data, inform commanders’ decisions, speed the fires kill chain, and reduce the cognitive burden on Soldiers — is a major focus of ongoing operational prototyping of Next Generation Command and Control, NGC2, the Army’s priority effort to leverage rapid progress in commercial technology to deliver information across all warfighting functions.

The overarching goal of AI for C2, leaders said, is to enable human decisions at machine speed.

“No other technology will have a bigger impact on future warfare than artificial intelligence,” said Brig. Gen. Michael Kaloostian, director of the Command and Control Future Capability Directorate, U.S. Army Transformation and Training Command. “The way we harness and adopt AI to support decision-making, and to make sense of what is expected to be a very chaotic battlefield in the future, will ultimately give commanders options to achieve decision overmatch.”

Applying AI at echelon — designing secure models for austere conditions, tailorable for specific missions and warfighting functions — was the focus of an industry workshop conducted earlier this month by the C2 Future Capability Directorate and Army Contracting Command-Aberdeen Proving Ground.

The market research event, with technical experts from a range of companies and Army organizations, produced feedback on how the Army can better leverage private sector innovation in AI for C2. Areas to maximize industry opportunities and expertise included prioritization of desired capabilities over time, as well as the availability and relevance of Army warfighting and training data that AI models can consume.

“Everybody sees private sector investment happening in AI, so where does the tactical Army fit in the AI market?” said Col. Chris Anderson, project manager Data and AI for Capability Program Executive Command, Control, Communications and Network. “The Army’s unique value proposition for industry is our data and access to warfighters.”

The workshop session also came on the heels of a request for information released on SAM.gov on Dec. 2, focused on gaining industry feedback on the emerging data architecture for NGC2. The Army securely shared the draft architecture on SAM.gov to foster transparency and invite industry ideas that will augment the current NGC2 prototype experimentation and designs underway with vendor teams supporting the 4th Infantry Division and 25th Infantry Division.

“The Army’s approach with Next Generation C2 has always been commercially driven, with industry as foundational partners,” said Joe Welch, portfolio acquisition executive for C2/Counter C2, and Executive Director, T2COM. “That means all of industry — not just our current team leads, but a large range of companies that can contribute to a thriving ecosystem. This RFI is another step in our commitment to sharing technical details and applying industry feedback as we move forward with NGC2.”

One challenge the Army and industry are jointly facing with AI implementation at the edge is that models are only as good as the data they can ingest and interpret. But available data, as well as computing and network resources required to process it, will vary widely depending on the tactical environment.

“For AI at the strategic level, that’s almost entirely unconstrained by storage and compute,” Anderson said. “Down at the foxhole, it’s an entirely different story.”

Because of that complexity, the Army is designing the NGC2 ecosystem to rapidly onboard new AI models, building on a common foundation but able to address new missions and environments.

“We’re looking to really provide an ecosystem so that model developers and Soldiers have the capability to fine-tune models at the edge,” Welch said. “When we say that the Army has specific model gaps that we need addressed, it will be a pipeline to very rapidly move that through.”
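One way to picture the constraint Anderson and Welch describe, purely as a hypothetical sketch (model names and resource figures invented for illustration), is a selector that deploys the largest model variant a given node’s storage and compute budget can support:

```python
# Hypothetical model-selection sketch for edge deployment; all figures invented.
MODELS = [  # (name, params_billion, min_ram_gb, needs_gpu)
    ("edge-tiny", 0.5, 2, False),
    ("edge-small", 3, 8, False),
    ("theater-medium", 13, 32, True),
    ("strategic-large", 70, 160, True),
]

def pick_model(ram_gb: float, has_gpu: bool) -> str:
    """Return the largest model that fits the node's resource budget."""
    candidates = [m for m in MODELS
                  if m[2] <= ram_gb and (has_gpu or not m[3])]
    return max(candidates, key=lambda m: m[1])[0] if candidates else "none"

print(pick_model(ram_gb=4, has_gpu=False))    # foxhole device -> edge-tiny
print(pick_model(ram_gb=256, has_gpu=True))   # strategic node -> strategic-large
```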

Another element of the Army’s roadmap is determining what algorithmic warfare capability is required by echelon, from Corps to company and below, informed by the data each unit needs to make decisions, Kaloostian said. The NGC2 prototyping underway with the 4th ID’s Ivy Sting and 25th ID’s Lightning Surge events is providing significant insight into those requirements, as well as the tactics, techniques and procedures for employing different AI applications, he said.

Even as technology and concepts rapidly evolve, the Army will maintain its ethical standards in using AI to support C2 decisions made by humans, leaders said. For example, during the 4ID Ivy Sting series at Fort Carson, Colorado, the division has trained AI models to review sensor data and rapidly recognize, process, and nominate targets. The commander reviews that information and decides whether to order a fire mission. At the staff level, AI can also reduce the time Soldiers spend sifting through and organizing data from a constantly expanding range of data sources and digital systems.
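The division of labor described here, where the model may only nominate and a human decides, can be sketched as follows (a hypothetical structure, not the 4th ID’s actual system); note that every engagement decision is attributable to a named person:

```python
# Human-in-the-loop nomination sketch; structure and thresholds hypothetical.
from dataclasses import dataclass

@dataclass
class Nomination:
    track_id: str
    confidence: float
    approved: bool = False
    decided_by: str | None = None

def nominate(detections, threshold=0.85):
    """Model side: surface only high-confidence detections for review."""
    return [Nomination(d["id"], d["conf"]) for d in detections
            if d["conf"] >= threshold]

def commander_review(nomination, approve, name):
    """Human side: the engagement decision is made and signed by a person."""
    nomination.approved = approve
    nomination.decided_by = name
    return nomination

queue = nominate([{"id": "T-101", "conf": 0.92}, {"id": "T-102", "conf": 0.60}])
print(commander_review(queue[0], approve=True, name="CDR"))
```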

“A lot of what we’re looking to provide here is a reduction in the cognitive burden that comes with the use of a lot of digital tools,” Welch said. “Not just AI target recognition, but generalized AI capabilities are going to help lower that cognitive burden so that our Soldiers can focus on their core tasks to complete the mission.”

By Claire Heininger

Hegseth Introduces Department to New AI Tool

Wednesday, December 10th, 2025

Yesterday, several employees at the Pentagon got a pop-up on their computers inviting them to use a new artificial intelligence tool developed for the War Department. Some were skeptical, wondering if the invitation was part of a cybersecurity test.

But by this morning, those concerns were gone — posters around the Pentagon and an email from Secretary of War Pete Hegseth assured everyone that the new tool is not only legit, but that he wants everybody to start using it. 

“I am pleased to introduce GenAI.mil, a secure generative AI platform for every member of the Department of War,” Hegseth wrote in the email. “It is live today and available on the desktops of all military personnel, civilians and contractors. With this launch, we are taking a giant step toward mass AI adoption across the department. This tool marks the beginning of a new era, where every member of our workforce can be more efficient and impactful.”

Visitors to the site will find that what’s available now is a specialized version of the Google AI tool Gemini, Gemini for Government. This version is approved to handle controlled unclassified information. A green banner at the top of the page reminds users of what can and can’t be shared on the site. 

In addition to Gemini for Government, the site indicates that other American-made frontier AI capabilities will be available soon. 

“There is no prize for second place in the global race for AI dominance,” said Emil Michael, undersecretary of war for research and engineering.

“We are moving rapidly to deploy powerful AI capabilities like Gemini for Government directly to our workforce. AI is America’s next manifest destiny, and we’re ensuring that we dominate this new frontier.” 

Access to the site is available only to personnel who have a common access card and are on the War Department’s nonclassified network.

When asked through a user prompt, “How will you help the Department of War achieve its mission?” GenAI replied with a list of capabilities, including, among other things, creating and refining documents, analyzing information, processing and analyzing satellite images, and even auditing computer code for security purposes.

“I can support the DOW’s mission by providing a range of capabilities designed for a secure, high-impact environment,” GenAI replied. “I am ready to support your mission requirements.”

The tool reminds users to double-check everything it provides to ensure accuracy. The highest authority within the War Department, Hegseth himself, provided that validation. 

“The first GenAI platform capability … can help you write documents, ask questions, conduct deep research, format content and unlock new possibilities across your daily workflows,” he wrote. “I expect every member of the department to log in, learn it and incorporate it into your workflows immediately. AI should be in your battle rhythm every single day; it should be your teammate. By mastering this tool, we will outpace our adversaries.”

For those unfamiliar with how to use AI, online training is available at genai.mil/resources/training.

By C. Todd Lopez, Pentagon News

Aechelon Integrates Vantor’s 3D Operational Terrain into Project Orbion SkyBeam to Enhance ICEYE’s Space-Based SAR AI Capabilities

Monday, December 8th, 2025

Partnership demonstrates ability to transform 24/7, all-weather SAR data into high-fidelity 3D synthetic environments to support time-sensitive missions

SOUTH SAN FRANCISCO, Calif., Dec. 1, 2025 — Aechelon Technology, Inc. (“Aechelon”), the leading provider of advanced geospatial and visual simulation solutions, today announced the successful proof-of-concept integration of Vantor’s 3D operational terrain into Aechelon’s Project Orbion.

This collaboration allows Aechelon to transform ICEYE’s high-resolution space-based synthetic aperture radar (SAR) imagery into high-fidelity, physics-accurate 3D terrain visualizations. The capability will be demonstrated interactively at I/ITSEC 2025 in Orlando, Florida.

Through this partnership, Aechelon’s SkyBeam™ AI exploitation system visually fuses ICEYE’s SAR detections of changes and objects on the ground with Vantor’s highly accurate, global-scale 3D spatial foundation—which is updated continuously to reflect the operational terrain—to create a mission-ready synthetic environment. This integration unlocks 24/7, all-weather updates to Project Orbion’s living 3D environment.

“Aechelon and Vantor have sustained a decades-long partnership—we’ve been continuously integrating Vantor’s high-resolution imagery and advanced 3D data into our products and delivering them at scale across a range of defense programs,” said Nacho Sanz-Pastor, Co-Founder and Chief Executive Officer (CEO) of Aechelon Technology Inc. “This marks the first integration of Vantor’s software-enabled spatial intelligence capabilities with Project Orbion, expanding the roster of industry leaders we’ve brought together to advance next-generation geospatial intelligence.”

Vantor’s 3D spatial foundation includes 3D terrain and 3D building footprints covering over 95% of Earth’s landmass and is accurate to within 3 meters in all dimensions. It is kept continuously up to date by Vantor’s industry-leading imaging satellite constellation—which can revisit the same location on Earth up to 15 times per day—and is delivered to the SkyBeam environment via Vantor’s Tensorglobe™ spatial intelligence platform.

Project Orbion represents the industry’s first AI-enabled Digital Twin of the Earth—a continuously updated, sensor-fused 3D environment that integrates satellite imagery, radar intelligence, photogrammetry, and real-time detections into a single exploitable picture.

The proof of concept highlights Aechelon’s open AI exploitation environment capable of fusing heterogeneous geospatial sources. The system uses Aechelon’s AI algorithms to extract detailed 3D vegetation and process Vantor building footprints for 3D models. Aechelon’s dynamic moving models then correlate ICEYE SAR detections, transforming them into precise, real-time 3D representations against a fused, high-fidelity terrain backdrop.
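At its core, correlating a 2D detection with 3D terrain is a geometry problem: sample the elevation surface at the detection’s location and place the model at that height. Below is a toy stand-in (not SkyBeam’s implementation) using bilinear interpolation over a tiny elevation patch:

```python
# Toy terrain-sampling sketch; grid values and coordinates are hypothetical.
def bilinear(grid, x, y):
    """Sample elevation at fractional grid coordinates (x, y)."""
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    return (grid[y0][x0] * (1 - dx) * (1 - dy)
            + grid[y0][x0 + 1] * dx * (1 - dy)
            + grid[y0 + 1][x0] * (1 - dx) * dy
            + grid[y0 + 1][x0 + 1] * dx * dy)

elev = [[100, 102], [101, 105]]       # tiny elevation patch, meters
detection = {"x": 0.4, "y": 0.7}      # SAR detection in grid coordinates
z = bilinear(elev, detection["x"], detection["y"])
print(f"3D placement: ({detection['x']}, {detection['y']}, {z:.1f} m)")
```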

Aechelon AI also enhances Vantor imagery and elevation data through machine learning–based environmental modeling, including real-time snow accumulation synchronized with ICEYE SAR satellite passes, further refining the visual fidelity and elevation accuracy.

Aechelon’s overall system enables operators to interactively visualize moving targets, new structures, and other changes with high accuracy.

Together, Aechelon, Vantor, ICEYE, and Project Orbion advance Aechelon’s mission to deliver next-generation geospatial intelligence—providing U.S. and allied forces with correlated, high-fidelity, mission-ready synthetic environments. Beyond defense, the initiative supports disaster response, emergency management, and autonomous AI system training.  

Built on Aechelon’s SkyBeam™ platform, Project Orbion represents the future of Aechelon’s global, continuously updated, sensor-fused geospatial ecosystem. The live demonstration at I/ITSEC 2025 will showcase the quality of the fused 3D environments.

For more details on Project Orbion’s rapid update capabilities, visit aechelon.com/solutions/project-orbion.

Axon Vision Announces Strategic Cooperation Agreement with Leonardo DRS to Deliver AI-Enhanced Counter-UAS Solutions for US Market

Monday, December 8th, 2025

TEL AVIV, Israel, Dec. 3, 2025 — Axon Vision (TASE: AXN) announced today a strategic cooperation agreement with Leonardo DRS to pursue opportunities in advanced situational awareness, lethality, and survivability, with special emphasis on Counter-UAS (C-UAS) solutions in the U.S. defense market. The partnership is expected to address a critical demand for on-platform AI-driven capabilities that support force protection and platform modernization. It positions both companies for expanded participation in large-scale programs seeking proven, scalable solutions.

Under the new Memorandum of Agreement, the cooperation leverages Leonardo DRS’ deep operational experience, advanced sensors, and system integration capabilities along with Axon Vision’s AI-based perception and autonomy technologies. Together, the companies aim to deliver turnkey, next-generation combat systems that provide low-latency, high-bandwidth sensor data management, enhance crew awareness, and enable automated threat engagement, with a focus on C-UAS.

The collaboration builds on an ongoing relationship between the two companies and a shared commitment to enhancing mission effectiveness, protecting lives and delivering technological superiority on the modern battlefield. Together the companies have jointly developed and demonstrated operational solutions embedding Axon Vision’s AI-based perception and automation capabilities into Leonardo DRS’ range of integrated multi-spectral, multi-function C-UAS mission packages, which include radar, electro-optical and infrared advanced sensors, rugged AI-ready processors, and both kinetic and non-kinetic effectors.

The most recent demonstration came at the Association of the United States Army’s exposition, which showcased unmanned ground vehicle platforms hosting modular, reconfigurable Leonardo DRS mission payloads integrated with Axon Vision’s AI solutions for aerial threat detection and defeat and AI-enhanced smart remote-controlled weapon station (RCWS) capabilities. The combined offering is designed to significantly enhance a platform’s lethality, autonomous operation, and situational awareness.

“We are excited to solidify our relationship with Axon by integrating advanced mission equipment packages with their AI-driven Counter-UAS and smart RCWS capabilities. It represents a decisive leap forward in manned and unmanned ground combat effectiveness,” said Aaron Hankins, senior vice president and general manager of the Leonardo DRS Land Systems business unit. “By combining real-time aerial threat detection, autonomous or man-on-the-loop engagement, and enhanced situational awareness, this solution delivers unmatched lethality and operational superiority.”
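The “man-on-the-loop” engagement Hankins references is a recognizable control pattern: the system tracks, classifies, and proposes an effector automatically, while an operator retains a veto window before action. A hypothetical sketch (thresholds and effector names invented for illustration):

```python
# Man-on-the-loop engagement sketch; illustrative logic only.
import time

def select_effector(track):
    """Prefer a non-kinetic defeat for slow, low UAS; kinetic otherwise."""
    if track["speed_mps"] < 30 and track["alt_m"] < 400:
        return "jammer"        # non-kinetic effector
    return "interceptor"       # kinetic effector

def engage(track, operator_veto, veto_window_s=2.0):
    """Propose an engagement, then hold open a window for a human veto."""
    effector = select_effector(track)
    print(f"track {track['id']}: proposing {effector}, veto window open")
    deadline = time.monotonic() + veto_window_s
    while time.monotonic() < deadline:
        if operator_veto():
            return "held by operator"
        time.sleep(0.1)
    return f"engaged with {effector}"

print(engage({"id": "UAS-7", "speed_mps": 22, "alt_m": 150},
             operator_veto=lambda: False))
```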

“We are proud to partner with Leonardo DRS, a trusted and highly capable leader in the U.S. defense industry,” said Ido Rozenberg, president, CTO & co-founder of Axon Vision. “This collaboration marks a significant step in bringing Axon Vision’s advanced AI solutions to the U.S. defense market. Together, we are delivering a true powerhouse solution for combat vehicles, combining world-class sensing and integration with cutting-edge AI to enhance lethality, survivability, and overall battlefield dominance.”

Founded in 2017 by entrepreneurs Ido Rozenberg, Raz Roditi, and Michael Zolotov, Axon Vision is a leading provider of AI-based operational systems for the defense market. Its solutions are trusted by the IDF and other military forces worldwide.

For further information, visit www.axon-vision.com.