Custom LLMs, Ready in Minutes with LLaMA Factory

This virtual machine comes preconfigured with LLaMA Factory, a powerful framework designed to simplify large language model fine-tuning and deployment. The VM provides a fully prepared environment for experimenting with, training, and serving modern LLMs without the complexity of manual setup.

LLaMA Factory is an open-source framework built on PyTorch that unifies the workflows needed to customize and deploy large language and multimodal models. It brings together training, evaluation, optimization, and inference into a single, consistent toolkit.
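
As a quick orientation, LLaMA Factory is typically driven through a single command-line entry point that covers training, interactive evaluation, and export. The commands below are a minimal sketch of that workflow: the subcommand names follow LLaMA Factory's documented CLI, while the YAML file names are placeholders you would replace with your own configurations.

    # Fine-tune a model described by a YAML configuration file (file name is a placeholder)
    llamafactory-cli train my_sft_config.yaml

    # Chat with the resulting model to evaluate it interactively
    llamafactory-cli chat my_chat_config.yaml

    # Merge LoRA adapters and export the final model for serving
    llamafactory-cli export my_export_config.yaml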

Core Capabilities:

  • Extensive Model Compatibility

    Works with a broad range of LLMs and multimodal models such as LLaMA, Qwen, Gemma, Mistral, ChatGLM, Phi, and others, enabling flexibility across use cases.

  • Multiple Fine-Tuning Methods

    Supports full-parameter training, LoRA and QLoRA, partial freezing, reward modeling, and preference-based learning methods such as DPO and PPO; a minimal configuration sketch follows this list.

  • Performance-Focused Design

    Includes modern efficiency techniques such as FlashAttention-2 and 4-/8-bit quantization to reduce GPU memory usage and speed up training.

  • Intuitive User Experience

    Offers both CLI workflows and a browser-based UI where users can manage datasets, adjust training settings, and track progress visually.
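
To make the fine-tuning options above more concrete, here is a minimal sketch of a LoRA supervised fine-tuning run. It is modeled on the style of the example configurations shipped with LLaMA Factory; the base model, dataset, output path, and hyperparameters are illustrative placeholders, and exact option names can vary between versions, so treat it as a starting point rather than a drop-in recipe.

    # lora_sft.yaml -- minimal LoRA SFT configuration (all values are illustrative)
    model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct  # base model (placeholder)
    stage: sft                        # supervised fine-tuning
    do_train: true
    finetuning_type: lora             # change to full or freeze for other methods
    lora_target: all                  # attach LoRA adapters to all linear layers
    dataset: alpaca_en_demo           # demo dataset bundled with LLaMA Factory
    template: llama3                  # prompt template matching the base model
    cutoff_len: 2048
    output_dir: saves/llama3-8b-lora-sft
    per_device_train_batch_size: 1
    gradient_accumulation_steps: 8
    learning_rate: 1.0e-4
    num_train_epochs: 3.0
    bf16: true

    # Launch the training run with the configuration above
    llamafactory-cli train lora_sft.yaml

Switching to QLoRA usually amounts to adding a quantization setting (for example, quantization_bit: 4), and preference-based stages such as DPO use a preference-formatted dataset with stage: dpo; the example configs installed alongside LLaMA Factory show version-accurate keys for each method.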

Why Choose This LLaMA Factory VM?

  • Everything is preinstalled and configured, removing the friction of dependency management and GPU setup.

  • The VM works equally well for newcomers using the graphical interface (see the launch example after this list) and experienced practitioners who prefer scriptable, low-level control.

  • Built for Growth: scales with your workload on cloud infrastructure, so you can start small and resize the VM as your training needs grow.

  • Faster Time to Results: By eliminating setup overhead and providing optimized defaults, this VM helps you focus on building and refining models instead of managing infrastructure.
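
If you are new to the stack, the quickest way to see the environment working is to start the graphical interface mentioned above. The command follows LLaMA Factory's standard CLI; the port shown is the Gradio default and may be configured differently on this VM, so check the user guide for the exact address and remember to open the port in your cloud firewall or security group.

    # Start the browser-based UI (LLaMA Board); Gradio listens on port 7860 by default
    llamafactory-cli webui

    # Then browse to http://<vm-public-ip>:7860
    # (replace <vm-public-ip> with the address assigned by your cloud provider)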


Disclaimer: Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and/or names or their products and are the property of their respective owners. We disclaim proprietary interest in the marks and names of others.

NVIDIA License disclaimer:

Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved. This container image and its contents are governed by the NVIDIA Deep Learning Container License. By pulling and using the container, you accept the terms and conditions of this license: https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license