Webinar: Using NVIDIA-Based Distributed Processing to Speed Mission-Critical AI at the Edge

August 30, 2022

The need for data center-caliber technologies architected to deliver higher performance and enable powerful centralized edge processing is growing rapidly. Increasing volumes of data ingested from sensors must be analyzed in real time to produce actionable insights for decision-making and competitive advantage.

For the first time, the market has an optimized, network-attached, rugged distributed GPU processing system purpose-built for challenging AI edge workloads. Join NVIDIA and Mercury to learn how they are:

  • Speeding low-latency, network-attached everything at the edge with disaggregated processing
  • Enabling GPU parallel computing resources via high-speed Ethernet networks without an x86 host
  • Pairing NVIDIA DPUs and GPUs for high-performance applications
  • Designing Rugged Distributed Processing (RDP) servers that reduce the SWaP, cost, and complexity of deploying GPU servers
