Using NVIDIA-based Distributed Processing to Speed Mission-Critical AI at the Edge

May 16, 2023

The need for data center-caliber technologies architected to deliver higher performance and enable powerful centralized processing at the edge is growing rapidly. Ever-greater volumes of sensor data must be analyzed in real time to produce actionable insights for decision-making and competitive advantage.

For the first time, the market has an optimized, network-attached, rugged distributed GPU processing system purpose-built for challenging edge AI workloads. Join NVIDIA and Mercury to learn more about how they are:

  • Speeding low-latency, network-attached processing at the edge with disaggregated architectures
  • Enabling GPU parallel computing resources via high-speed Ethernet networks without an x86 host
  • Pairing NVIDIA DPUs and GPUs for high-performance applications
  • Designing Rugged Distributed Processing (RDP) servers that reduce the size, weight, and power (SWaP), cost, and complexity of deploying GPU servers
