This tutorial explains how to download HuggingFace models on Glows.ai, using two available methods: Glows.ai Datadrive storage (local download and upload), and instance-based storage (download directly inside an instance).
- Glows.ai Datadrive storage: Data can be read and written without limits for as long as your Storage Space plan remains valid. Downloads run over your local network, so the initial transfer may be slower, but because the data persists, this option suits users who need the same data repeatedly (e.g., for model serving).
- Instance-based storage: Data persists only while the instance is running; once the instance is released, the data is deleted. Because downloads share the bandwidth of the instance’s data center, speeds are high. This method suits users who need the data only once (e.g., for testing model performance).
Glows.ai Datadrive Storage
This method saves data to the Glows.ai Datadrive. Make sure your Storage Space plan provides enough capacity, and allocate that capacity to the Datadrive in the region you intend to use.
Allocate Storage
Suppose you need to download a 65 GB model and plan to use an NVIDIA GeForce RTX 4090 GPU in the TW-03 region. First, go to Storage Space and purchase a 100 GB storage package.

Then, click the Modify button in the Storage Space interface to allocate 70 GB of space to the Datadrive in the TW-03 region.
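As a quick sanity check, the sizing above is simple arithmetic (the 65 GB model size and 70 GB allocation come from this example; the headroom calculation is just illustrative):

```python
# Rough sizing check for a Datadrive allocation (illustrative only).
model_size_gb = 65   # model to download, from the example above
allocation_gb = 70   # space allocated to the TW-03 Datadrive

assert allocation_gb >= model_size_gb, "allocation too small for the model"
margin_gb = allocation_gb - model_size_gb
print(f"Headroom after download: {margin_gb} GB")  # prints: Headroom after download: 5 GB
```

Leaving some headroom is useful because downloads may briefly stage temporary files alongside the final model.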

Datadrive Client Download
The Data Drive client currently supports downloading models from HuggingFace directly to the Datadrive in each region. The process works as follows: the client downloads HuggingFace model chunks over your local network, then synchronizes them to the Datadrive.
- Install the Data Drive client: Download here
- Follow the tutorial: Download models from HuggingFace
Instance-based Storage
Create an Instance
This method requires creating an instance on Glows.ai. Suppose you use an NVIDIA GeForce RTX 4090 GPU in the TW-03 region with the CUDA 12.8 Torch 2.8.0 Base environment.

Once the instance is created, you can connect to it via SSH or HTTP Port 8888 (JupyterLab).

Download Model Using Commands
JupyterLab is simple to use, so the following example demonstrates the steps there. Open a new Terminal.

Enter the following command to install HuggingFace’s official model management tool huggingface_hub:
pip install -U huggingface_hub

Once installed, you can use the hf command to download model files directly to the instance.
For example, to download openai/gpt-oss-20b into the /gpt-oss-20b directory, use:
hf download openai/gpt-oss-20b --local-dir /gpt-oss-20b
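If you prefer to script the download rather than use the CLI, the same huggingface_hub package exposes this functionality in Python via snapshot_download. A minimal sketch, reusing the repository and target directory from the CLI example above:

```python
def download_model(repo_id: str = "openai/gpt-oss-20b",
                   local_dir: str = "/gpt-oss-20b") -> str:
    """Download a full model snapshot and return its local path."""
    # Imported inside the function so the module is only needed when called;
    # it was installed above with `pip install -U huggingface_hub`.
    from huggingface_hub import snapshot_download
    return snapshot_download(repo_id=repo_id, local_dir=local_dir)

# Usage (inside the instance): download_model()
```

Calling download_model() performs the same transfer as the hf command; snapshot_download also resumes interrupted downloads automatically.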

Running HuggingFace Models on Glows.ai
Some frameworks support directly loading and running HuggingFace models, such as Transformers, SGLang, and GPUStack. You can use the software you’re most familiar with for deployment or refer to the tutorials below:
- How to run DeepSeek-R1 on multiple machines with multiple GPUs using SGLang on Glows.ai
- How to Run GPUStack on Glows.ai
The HuggingFace website also provides usage examples. If you have any questions or suggestions while working on Glows.ai, feel free to contact us through the channels listed in Contact Us.

Contact Us
If you have any questions or suggestions while using Glows.ai, feel free to contact us via email, Discord, or Line.
Email: support@glows.ai
Discord: https://discord.com/invite/glowsai
Line: https://lin.ee/fHcoDgG