Webinar: Using NVIDIA-Based Distributed Processing to Speed Mission-Critical AI at the Edge

August 30, 2022

The need for data center-caliber technologies architected to deliver higher performance and enable powerful centralized edge processing is growing rapidly. Ever-larger volumes of sensor data must be analyzed in real time to produce actionable insights for decision-making and competitive advantage.

For the first time, the market has an optimized, network-attached, rugged distributed GPU processing system purpose-built for demanding AI edge workloads. Join NVIDIA and Mercury to learn how they are:

  • Speeding low-latency, network-attached everything at the edge with disaggregated processing
  • Enabling GPU parallel computing resources via high-speed Ethernet networks without an x86 host
  • Pairing NVIDIA DPUs and GPUs for high-performance applications
  • Designing Rugged Distributed Processing (RDP) servers that reduce the SWaP, cost, and complexity of deploying GPU servers

