NVIDIA GPUs Power Meta’s Next-Gen Llama 3 Model, Optimized AI Across All Platforms Including RTX

Hassan Mujtaba

NVIDIA has announced that Meta's Llama 3 LLMs were built with NVIDIA GPUs and are optimized to run across all platforms, from servers to PCs.

Meta's Next-Gen Llama 3 AI LLMs Are Here & NVIDIA Is The Driving Force Behind Them, Optimized Support Across Cloud, Edge & RTX PCs

Press Release: NVIDIA today announced optimizations across all its platforms to accelerate Meta Llama 3, the latest generation of the large language model (LLM). The open model, combined with NVIDIA accelerated computing, equips developers, researchers, and businesses to innovate responsibly across a wide variety of applications.


Trained on NVIDIA AI

Meta engineers trained Llama 3 on a computer cluster packing 24,576 NVIDIA H100 Tensor Core GPUs, linked with an NVIDIA Quantum-2 InfiniBand network. With support from NVIDIA, Meta tuned its network, software, and model architectures for its flagship LLM.

To further advance the state of the art in generative AI, Meta recently described plans to scale its infrastructure to 350,000 H100 GPUs.

Putting Llama 3 to Work

Versions of Llama 3, accelerated on NVIDIA GPUs, are available today for use in the cloud, data center, edge, and PC.


Businesses can fine-tune Llama 3 with their data using NVIDIA NeMo, an open-source framework for LLMs that’s part of the secure, supported NVIDIA AI Enterprise platform. Custom models can be optimized for inference with NVIDIA TensorRT-LLM and deployed with Triton Inference Server.
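For a sense of what that deployment looks like from the client side, here is a minimal sketch that queries a Llama 3 model already served by Triton Inference Server over its HTTP generate endpoint. The server address, the model name ("ensemble"), and the request fields follow a typical TensorRT-LLM backend layout and are assumptions for illustration; an actual deployment may expose different names.

```python
# Minimal sketch: client call to a Llama 3 model behind Triton Inference Server.
# Assumes a TensorRT-LLM engine is already deployed; the model name "ensemble"
# and the field names below are typical for that backend but may differ.
import requests

TRITON_URL = "http://localhost:8000/v2/models/ensemble/generate"

payload = {
    "text_input": "Summarize what NVIDIA NeMo is used for.",
    "max_tokens": 128,
    "temperature": 0.7,
}

resp = requests.post(TRITON_URL, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["text_output"])
```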

Taking Llama 3 to Devices and PCs

Llama 3 also runs on Jetson Orin for robotics and edge computing devices, creating interactive agents like those in the Jetson AI Lab. What’s more, RTX and GeForce RTX GPUs for workstations and PCs speed inference on Llama 3. These systems give developers a target of more than 100 million NVIDIA-accelerated systems worldwide.
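As an illustration of local inference on such hardware, the sketch below loads the eight-billion-parameter Llama 3 variant on a single NVIDIA GPU through Hugging Face Transformers. This is one common route rather than anything the press release prescribes, and the gated model on the Hugging Face Hub requires accepting Meta's license first.

```python
# Minimal sketch: running Llama 3 8B Instruct locally on an NVIDIA GPU
# with Hugging Face Transformers (requires the transformers and accelerate
# packages, plus access to the gated meta-llama repository on the Hub).
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.float16,   # the 8B weights need roughly 16 GB of VRAM in FP16
    device_map="auto",           # place the model on the available GPU
)

messages = [{"role": "user", "content": "What kinds of tasks run on a Jetson Orin?"}]
out = pipe(messages, max_new_tokens=100)
print(out[0]["generated_text"][-1]["content"])  # the assistant's reply
```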

Get Optimal Performance with Llama 3

Best practices in deploying an LLM for a chatbot involve balancing low latency, good reading speed, and optimal GPU use to reduce costs. Such a service needs to deliver tokens (the rough equivalent of words to an LLM) at about twice a user's reading speed, which works out to about 10 tokens/second.


Applying these metrics, a single NVIDIA H200 Tensor Core GPU generated about 3,000 tokens/second in an initial test using the version of Llama 3 with 70 billion parameters, enough to serve about 300 simultaneous users. That means a single NVIDIA HGX server with eight H200 GPUs could deliver 24,000 tokens/second, further optimizing costs by supporting more than 2,400 users at the same time.
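The capacity figures follow directly from that 10 tokens/second rule of thumb. A quick sanity check of the arithmetic, using only the numbers quoted above:

```python
# Serving-capacity arithmetic from the figures quoted in the article.
TOKENS_PER_USER = 10        # tokens/second target per reader (2x reading speed)
H200_THROUGHPUT = 3_000     # tokens/second, one H200 GPU on the 70B model
GPUS_PER_HGX = 8            # H200 GPUs in a single NVIDIA HGX server

users_per_gpu = H200_THROUGHPUT // TOKENS_PER_USER        # 300 users
server_throughput = GPUS_PER_HGX * H200_THROUGHPUT        # 24,000 tokens/second
users_per_server = server_throughput // TOKENS_PER_USER   # 2,400 users

print(users_per_gpu, server_throughput, users_per_server)  # 300 24000 2400
```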

For edge devices, the version of Llama 3 with eight billion parameters generated up to 40 tokens/second on Jetson AGX Orin and 15 tokens/second on Jetson Orin Nano.

Advancing Community Models

An active open-source contributor, NVIDIA is committed to optimizing community software that helps users address their toughest challenges. Open-source models also promote AI transparency and let users broadly share work on AI safety and resilience.
