NVIDIA

Accelerate Artificial Intelligence at the Edge

Mercury delivers the latest data center technologies to accelerate artificial intelligence (AI) applications at the edge. Powered by NVIDIA's latest graphics processing units (GPUs), our processing subsystems ensure mission success by enabling real-time critical decision-making in the field, scaling AI applications from cloud to edge.


Podcast

Accelerating AI at the Edge for Large-Scale Analytics

Tune in to our podcast

Blog

Taking the Data Center to the Edge with High-Performance Embedded Edge Computing

Read Blog Post

Webinar

AI at the Tactical Edge: Securing Machine Learning for Multi-Domain Operations

Watch on demand

AI is the future. Learn how Mercury & NVIDIA are making it possible today. 

CHALLENGE

Real-time decision-making at the edge

SOLUTION

High-performance computing with RES AI servers and OpenVPX GSC6204 modules processes data in the field, at the very edge, or quickly transmits it to the cloud for actionable insights.

DOWNLOAD NVIDIA GPU OPTIONS FOR YOUR SOLUTIONS

Handle Massive AI Workloads with NVIDIA GPUs


With early access to new technologies, Mercury aligns its product development with NVIDIA's roadmap to deliver the most advanced GPU accelerators. We support NVIDIA software development kits, including Metropolis, which optimizes AI video analytics, as well as JetPack, DeepStream, and TensorRT, which enhance video and camera inferencing.
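For teams getting started with these SDKs, the sketch below shows a typical TensorRT inference pass in miniature: deserializing a prebuilt engine and running a single image through it. It assumes a TensorRT 8.x-style Python API with pycuda for buffer management; the engine file name ("model.plan") and the input/output shapes are illustrative placeholders, not Mercury- or NVIDIA-supplied artifacts.

```python
# Minimal sketch: single-shot inference with a prebuilt TensorRT engine.
# The engine path and tensor shapes below are assumptions for illustration.
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)

# Deserialize an engine built offline (for example, from ONNX via trtexec).
with open("model.plan", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Host and device buffers for one input and one output binding.
h_input = np.random.rand(1, 3, 224, 224).astype(np.float32)  # example frame
h_output = np.empty((1, 1000), dtype=np.float32)             # example logits
d_input = cuda.mem_alloc(h_input.nbytes)
d_output = cuda.mem_alloc(h_output.nbytes)

# Copy in, execute, copy out.
cuda.memcpy_htod(d_input, h_input)
context.execute_v2([int(d_input), int(d_output)])
cuda.memcpy_dtoh(h_output, d_output)

print("Top class:", int(h_output.argmax()))
```

In a DeepStream video-analytics pipeline, the same engine would typically be wrapped by the nvinfer plugin rather than driven by hand as above.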


NVIDIA T4 GPU

The T4 GPU accelerator offers the performance of up to 100 CPUs in an energy-efficient, 70-watt PCIe form factor.


NVIDIA A100 GPU

The A100 is the world's first combined GPU and SmartNIC accelerator. Data is received and analyzed at the same time, delivering up to 20x higher performance than the previous generation.


Embedded TU104 GPU

The TU104 incorporates NVIDIA’s NVLink high-speed interconnect to deliver high-bandwidth, low-latency connectivity between pairs of GPUs, allowing them to share memory capacity and split the workload.
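As a rough illustration of how paired GPUs can split a workload, the PyTorch sketch below scatters one inference batch across two devices and gathers the results; the inter-GPU copies use NVLink transparently when the pair is linked. The model choice, batch size, and two-GPU assumption are illustrative, not a Mercury reference design.

```python
# Minimal sketch: splitting one inference batch across a pair of GPUs.
# NVLink (when present) is used transparently for the peer-to-peer copies;
# the model, batch size, and device IDs are assumptions for illustration.
import torch
import torchvision.models as models

assert torch.cuda.device_count() >= 2, "this example assumes a GPU pair"

model = models.resnet50(weights=None).eval()
model = torch.nn.DataParallel(model, device_ids=[0, 1]).cuda()

# One large batch is scattered across both GPUs and the outputs are
# gathered back on GPU 0.
batch = torch.randn(64, 3, 224, 224).cuda()
with torch.no_grad():
    logits = model(batch)

print(logits.shape)  # torch.Size([64, 1000])
```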


Respond to insights, signals and threats faster

Edge-ready GPU systems integrate the latest commercial technologies to keep pace with evolving AI algorithms, collecting and processing data in the field instantaneously and eliminating the need to transfer it to the cloud.


Improve cloud response times and simplify management

Supercomputing systems reduce latency by supporting the latest high-speed networks that adhere to open standards, simplifying integration and allowing you to build an agile IT infrastructure that extends from the cloud to the edge.


Mitigate cyber and supply-chain threats

Mercury's secure solutions are designed and manufactured in Defense Microelectronics Activity (DMEA)-accredited, IPC-1791 U.S. facilities with a traceable, managed supply chain, and they can be customized with security features to safeguard critical data and IP.


Rapidly & cost-effectively launch, incorporate and sustain AI

A modular, open-system approach promotes rapid innovation with easily upgradable components that don't require total system replacement. And our global team is available to support and help sustain your AI system uptime no matter where you operate.

REDUCE YOUR FOOTPRINT WITH THE DENSEST GPU SYSTEMS ON THE MARKET

Rack Servers

Compact, rugged and power-efficient 1U-4U rackmount servers that tackle mission-critical workloads with cutting-edge data center technologies

SEE MORE

Common Module System X08 Blade Servers

1U-3U multi-role, open standards-based server

SEE MORE

GPGPU, Graphics and Video Boards

Rugged GPGPU, video and graphics boards to capture real-time video and display high-resolution graphics

SEE MORE

Rugged Data Storage System (RDS)

Data center-class, rugged all-flash network-attached storage (NAS) system for complete data access over a network connection at low-latency, direct-attached speeds.

SEE MORE