localllm_download_model

localllm_download_model(
    type='gemma-3-270m-it-Q8_0',
    model_dir=None,
    overwrite=False,
    trace=True,
)

Download a local GGUF LLM model.

Parameters

type : str (default: "gemma-3-270m-it-Q8_0")
    The type/name of the model to download. You can pass either the model name
    or localllm/{model_name}. Currently supported models:
    - "gemma-3-270m-it-qat-Q4_0": Google Gemma 3 270M IT model (Q4_0 quantization)
    - "gemma-3-270m-it-Q8_0": Google Gemma 3 270M IT model (Q8_0 quantization)
    - "gemma-3-1b-it-Q8_0": Google Gemma 3 1B IT model (Q8_0 quantization)
    - "gemma-3-4b-it-qat-Q4_0": Google Gemma 3 4B IT model (Q4_0 quantization, with quantization-aware training)
    - "gemma-3-4b-it-Q4_K_M": Google Gemma 3 4B IT model (Q4_K_M quantization)
    - "gemma-3-12b-it-qat-Q4_0": Google Gemma 3 12B IT model (Q4_0 quantization)
    - "GLM-4.6V-Flash-Q4_K_M": GLM 4.6V Flash model (Q4_K_M quantization)
    - "translategemma-4b-it-q8_0": TranslateGemma 4B IT model (Q8_0 quantization)
    - "translategemma-12b-it-q4_k_m": TranslateGemma 12B IT model (Q4_K_M quantization)
    - "LFM2.5-350M-Q8_0": LFM2.5 350M model (Q8_0 quantization)
    - "LFM2.5-1.2B-Instruct-Q4_K_M": LFM2.5 1.2B Instruct model (Q4_K_M quantization)
    - "LFM2.5-1.2B-Instruct-Q8_0": LFM2.5 1.2B Instruct model (Q8_0 quantization)
    - "Qwen3-4B-Instruct-Q4_K_M": Qwen 3 4B model (Q4_K_M quantization)
    - "Qwen3-8B-Q4_K_M": Qwen 3 8B model (Q4_K_M quantization)
    - "Qwen3.5-0.8B-Q8_0": Qwen 3.5 0.8B model (Q8_0 quantization)
    - "Qwen3.5-2B-Q4_K_M": Qwen 3.5 2B model (Q4_K_M quantization)
    - "Qwen3.5-4B-Q4_K_M": Qwen 3.5 4B model (Q4_K_M quantization)
    - "Qwen3.5-9B-Q4_K_M": Qwen 3.5 9B model (Q4_K_M quantization)
    - "gemma-4-E2B-it-Q8_0": Google Gemma 4 E2B Instruct model (Q8_0 quantization)
    - "gemma-4-E2B-it-Q4_K_M": Google Gemma 4 E2B Instruct model (Q4_K_M quantization)
    - "gemma-4-E4B-it-Q8_0": Google Gemma 4 E4B Instruct model (Q8_0 quantization)
    - "gemma-4-E4B-it-Q4_K_M": Google Gemma 4 E4B Instruct model (Q4_K_M quantization)

model_dir : str or None (default: None)
    Directory where the model should be stored. If None, uses the path set in
    the environment variable LOCALLLM_MODEL_DIR; if that variable is not set,
    uses the default directory in the user's home folder (~/.localllm/models/).

overwrite : bool (default: False)
    If True, re-download the model even if it already exists. If False, skip
    the download if the model file is already present.

trace : bool (default: True)
    If True, show download progress.
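The directory-resolution order described for model_dir (explicit argument, then the LOCALLLM_MODEL_DIR environment variable, then the home-folder default) can be sketched as follows. This is an illustrative helper, not the library's actual internals; the function name resolve_model_dir is hypothetical.

```python
import os
from pathlib import Path

def resolve_model_dir(model_dir=None):
    """Illustrative sketch of the model_dir resolution order."""
    if model_dir is not None:
        # 1. An explicit argument always wins.
        return Path(model_dir)
    env_dir = os.environ.get("LOCALLLM_MODEL_DIR")
    if env_dir:
        # 2. Otherwise fall back to the environment variable.
        return Path(env_dir)
    # 3. Finally, default to ~/.localllm/models/ in the user's home folder.
    return Path.home() / ".localllm" / "models"
```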

Returns

str
    Path to the downloaded model file.

Examples

>>> from localllm import localllm_download_model
>>> model_path = localllm_download_model("gemma-3-270m-it-Q8_0")
>>> model_path = localllm_download_model("gemma-3-270m-it-Q8_0", overwrite=True)
Downloading...
>>> model_path = localllm_download_model("gemma-3-270m-it-Q8_0", overwrite=False)  # already present: skipped
>>> model_path = localllm_download_model("not-existing-model")
>>>
>>> import os
>>> path = os.getcwd()
>>> model_path = localllm_download_model(model_dir=path)  # download into the current directory
Downloading...
>>> ## More models
>>> os.remove(model_path)  # Clean up the file downloaded above
>>> for name in [
...     "gemma-3-270m-it-qat-Q4_0", "gemma-3-1b-it-Q8_0",
...     "gemma-3-4b-it-qat-Q4_0", "gemma-3-4b-it-Q4_K_M",
...     "gemma-3-12b-it-qat-Q4_0", "GLM-4.6V-Flash-Q4_K_M",
...     "translategemma-4b-it-q8_0", "translategemma-12b-it-q4_k_m",
...     "LFM2.5-350M-Q8_0", "LFM2.5-1.2B-Instruct-Q4_K_M",
...     "LFM2.5-1.2B-Instruct-Q8_0", "Qwen3-4B-Instruct-Q4_K_M",
...     "Qwen3-8B-Q4_K_M", "Qwen3.5-0.8B-Q8_0", "Qwen3.5-2B-Q4_K_M",
...     "Qwen3.5-4B-Q4_K_M", "Qwen3.5-9B-Q4_K_M",
...     "gemma-4-E2B-it-Q8_0", "gemma-4-E2B-it-Q4_K_M",
...     "gemma-4-E4B-it-Q8_0", "gemma-4-E4B-it-Q4_K_M",
... ]:
...     model_path = localllm_download_model(name, overwrite=True, trace=False)
...     os.remove(model_path)  # Clean up each downloaded file

Notes

The function creates the target directory if it doesn't exist. Progress is displayed during download when trace=True.
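The directory-creation and skip/overwrite behavior described above can be sketched as follows. This is an illustrative helper, not the library's actual implementation; the function name needs_download is hypothetical.

```python
import os

def needs_download(model_path, overwrite=False):
    """Sketch of the pre-download check: ensure the target directory
    exists, then report whether a download is actually required."""
    # Create the target directory (and parents) if it doesn't exist.
    os.makedirs(os.path.dirname(model_path), exist_ok=True)
    # Download when forced via overwrite, or when the file is missing.
    return overwrite or not os.path.exists(model_path)
```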