Mobile Battlefield Devices Show Great Potential Thanks to Army Research

ADELPHI, Md. — Soldiers on the battlefield cannot rely on bulky, high-powered devices or the cloud to conduct operations, so how can they efficiently run the programs and algorithms needed to succeed in their missions?

A collaborative effort between Army researchers has produced a tool that enables the Army to model, characterize and predict the performance of current and future machine learning-based applications on mobile devices. This makes it possible to deploy advanced analytics to the tactical edge in support of Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance operations.

The research is being conducted by Dr. Kevin Chan of the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory, together with Pennsylvania State University and IBM, a collaboration made possible by the lab’s Network Science Collaborative Technology Alliance, which is slated to conclude this year after a 10-year run.

The researchers detail their achievements in papers recently accepted to the Institute of Electrical and Electronics Engineers Transactions on Mobile Computing, titled “Augur: Modeling the Resource Requirements of ConvNets on Mobile Devices,” and to the IEEE/ACM Transactions on Networking, titled “NetVision: On-demand Video Processing in Wireless Networks.”

This research studies how convolutional neural networks on mobile devices such as smartphones are being used for various applications like object detection, language translation and audio classification, Chan said.

“Given the rapid advances and development of artificial intelligence and machine learning techniques, most of the research in deep learning is studied using devices or platforms that have a lot more resources to include processing, energy and storage, and commercial applications use the cloud for some of these complex computations,” Chan said. “As a result, there’s a great deal of uncertainty in the performance and resource requirements of these algorithms on mobile devices, for instance if they’ll take forever to run or use up all of the battery.”

The researchers profiled several commonly used deep learning algorithms on a range of current mobile computing platforms, including smartphones and mobile graphics processing units, and characterized how each performed.

The primary collaborator on this work was Thomas La Porta, director of the School of Electrical Engineering and Computer Science at Pennsylvania State University, where he is the Evan Pugh Professor and William E. Leonhard Professor.

“We characterized the runtime, memory usage and energy usage of these platforms, whereas typical studies are concerned with runtime and performance,” La Porta said. “Edge analytics requires us to study how these algorithms work on mobile devices. Obviously, commercial applications and vendors are interested in having applications work on smartphones, but they can more readily go to the cloud for help.”
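To give a concrete sense of what such profiling involves, here is a minimal Python sketch, not the team’s actual instrumentation: it times a mobile-class ConvNet’s forward pass and records peak Python-heap memory, two of the metrics characterized in the study. The model choice (MobileNetV2 via PyTorch) is an assumption for illustration, and energy measurement would require a platform-specific power monitor.

```python
# Minimal profiling sketch (illustrative only, not the Augur tool): times a
# ConvNet forward pass and records peak Python-heap memory. Energy would
# need a platform-specific power monitor and is not measured here.
import time
import tracemalloc

import torch
import torchvision.models as models

model = models.mobilenet_v2(weights=None).eval()  # a mobile-class ConvNet
x = torch.randn(1, 3, 224, 224)                   # one 224x224 RGB frame

# Warm-up run so lazy initialization does not skew the timing.
with torch.no_grad():
    model(x)

tracemalloc.start()
t0 = time.perf_counter()
with torch.no_grad():
    for _ in range(10):                           # average over 10 runs
        model(x)
latency_ms = (time.perf_counter() - t0) / 10 * 1000
_, peak_bytes = tracemalloc.get_traced_memory()   # Python heap only; native
tracemalloc.stop()                                # framework buffers excluded

print(f"mean latency: {latency_ms:.1f} ms, "
      f"peak Python-heap memory: {peak_bytes / 1e6:.1f} MB")
```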

Building on these profiles, the researchers developed a tool called Augur that predicts the performance and resource usage of future algorithms on future mobile devices.

“The result of this research can readily be used on future generations of algorithms and mobile devices,” Chan said.
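The papers detail Augur’s actual modeling approach; as a loose, hypothetical illustration of the underlying idea, predicting runtime from a network’s structure before ever running it, the sketch below fits a least-squares model mapping per-layer operation counts to measured latencies and then extrapolates to an unprofiled layer. All features and numbers are invented for the example.

```python
# Hypothetical sketch of the idea behind Augur (not its actual model):
# fit measured layer latencies against simple operation counts, then
# predict the runtime of a layer that has not been profiled.
import numpy as np

# Features per profiled layer: [multiply-accumulates (GFLOPs), parameters (M)]
# Targets: measured latency in ms on a given device. Illustrative numbers only.
X = np.array([[0.3, 0.5], [1.1, 2.3], [0.6, 1.0], [2.0, 4.4]])
y = np.array([4.2, 15.8, 8.1, 28.9])

coef, *_ = np.linalg.lstsq(X, y, rcond=None)      # least-squares fit

# Predict the latency of an unseen layer from its op counts alone.
new_layer = np.array([1.5, 3.0])
print(f"predicted latency: {new_layer @ coef:.1f} ms")
```

In practice, a predictor of this kind would be fit per device, since the same layer can cost very different amounts of time, memory and energy on different hardware.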

Understanding how these applications and algorithms work on mobile devices such as tablets, head-mounted displays and handhelds will be crucial for enabling analytics at the edge, he said.

The research also shows how these analytics can run on mobile devices, and how such operations can leverage more capable computing platforms deployed near the tactical edge to support complex analytics.

“Tactical networks have proposed the deployment of such capabilities, called microclouds, for example server-class machines in the back of Humvees,” Chan said. “The work on NetVision employs tactical microcloud capabilities in which mobile edge devices offload parts of the analytics workflow to these devices to speed up processing of the data.”

Chan said the approach finds the optimal division of processing between the mobile and microcloud computing resources, since the data must still cross a limited-bandwidth network.
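As a rough illustration of that trade-off, and not NetVision’s actual optimizer, the sketch below picks where to cut a video-analytics pipeline between a handheld and a microcloud: every candidate cut pays local compute, then a transfer cost set by the link bandwidth, then remote compute, and the cheapest cut wins. The stage names, per-stage costs and the 2 Mbps link are assumptions made up for the example.

```python
# Hedged sketch of the mobile/microcloud offloading trade-off (illustrative
# numbers, not NetVision's optimizer): choose the pipeline stage after which
# frames are shipped to the microcloud, minimizing end-to-end time.

STAGES = ["decode", "detect", "track", "classify"]
MOBILE_MS = [5.0, 120.0, 40.0, 90.0]     # per-frame cost on the handheld
CLOUD_MS = [1.0, 15.0, 6.0, 12.0]        # per-frame cost on the microcloud
OUT_MBITS = [8.0, 0.5, 0.1, 0.01]        # data emitted by each stage
LINK_MBPS = 2.0                          # assumed tactical link bandwidth

def total_ms(cut: int) -> float:
    """End-to-end time if stages [0, cut) run locally and the rest remotely."""
    local = sum(MOBILE_MS[:cut])
    # Transfer whatever the last local stage emits (a raw frame if cut == 0).
    payload = OUT_MBITS[cut - 1] if cut > 0 else 8.0   # assume ~8 Mb raw frame
    transfer = payload / LINK_MBPS * 1000              # Mb / Mbps -> ms
    remote = sum(CLOUD_MS[cut:])
    return local + transfer + remote

best = min(range(len(STAGES) + 1), key=total_ms)
print(f"best cut: offload after stage {best} "
      f"({total_ms(best):.0f} ms end-to-end)")
```

With these invented numbers, shipping raw video over the constrained link dominates the cost, so most of the pipeline runs locally and only the final, data-light step is offloaded; a faster link shifts the optimal cut toward the microcloud.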

“The Army will want to employ the latest AI&ML capabilities,” Chan said. “As algorithms and the devices running them improve, it will be important to understand what can run and what sort of performance to expect.”

For Chan, having this work published in an IEEE journal is a huge accomplishment.

“Acceptance at ToN and TMC is an indication that the work is high-quality and well-regarded,” Chan said. “In our field, these are considered the top-tier journals in which we aim for our research to be published. Earlier versions of this work were published at the 25th ACM International Conference on Multimedia and the Conference on Communications and Networks, which are both highly rated computer networking conferences and accomplishments on their own.”

This work was specifically performed within the NSCTA under the distributed video analytics task, and NetVision, in particular, was shown at the NSCTA Expo as a research highlight of the Quality of Information — Semantically Adaptive Networks thrust area.

“As a result of the second half of the program, we had a research task on video analytics,” Chan said. “This research, a collaboration with Penn State and IBM, was very productive, enabling CCDC ARL to work with world-class academic and industrial partners. This highly collaborative research leveraged diverse technical expertise – even shared equipment!”

Chan stated that this project, and all research conducted under the NSCTA, is crucial as the Army continues to develop science and technology for the future fight.

Since the Army has identified communications and networks as critical capabilities for current and future operations, Chan stated, researchers must consider how networked systems behave.

“The concept of multi-domain operations implies that operational domains are inherently interconnected,” Chan said. “The Army must understand and develop new technology and capabilities to enable a new way of operating. This will require, for example, understanding how to execute multi-domain command and control, and how to create situational awareness through the exchange of information across and within operational domains. ARL’s research in network science has advanced the state of the art of these capabilities to support multi-domain operations for a variety of the Army’s functions.”

For La Porta, this collaboration and research established a foundation for great things to come.

“This work was a valuable building block that allowed us as academic partners to build even deeper collaboration with CCDC ARL and develop systems and algorithms that allow for very fast object and action recognition in videos that are stored on mobile cameras,” La Porta said.

Looking to the future, laboratory officials said they will continue to engage the CCDC C5ISR (Command, Control, Computers, Communications, Cyber, Intelligence, Surveillance and Reconnaissance) Center and the U.S. Army Futures and Concepts Center to best understand where this research can be transitioned to get it one step closer to a Soldier’s hands.

By US Army CCDC Army Research Laboratory Public Affairs