# DeepSeek-R1-4bit

**Repository Path**: zc2020/DeepSeek-R1-4bit

## Basic Information

- **Project Name**: DeepSeek-R1-4bit
- **Description**: Mirror of https://huggingface.co/mlx-community/DeepSeek-R1-4bit
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 2
- **Created**: 2025-02-10
- **Last Updated**: 2025-02-10

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

---
base_model: deepseek-ai/DeepSeek-R1
tags:
- mlx
---

# mlx-community/DeepSeek-R1-4bit

The model [mlx-community/DeepSeek-R1-4bit](https://huggingface.co/mlx-community/DeepSeek-R1-4bit) was converted to MLX format from [deepseek-ai/DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1) using mlx-lm version **0.21.0**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download (if needed) and load the 4-bit model and its tokenizer.
model, tokenizer = load("mlx-community/DeepSeek-R1-4bit")

prompt = "hello"

# If the tokenizer ships a chat template, wrap the prompt in the
# chat format the model was trained on.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
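
DeepSeek-R1 emits a reasoning trace before its final answer, so the default generation cap can truncate the response. A minimal sketch of raising the budget, assuming the `max_tokens` keyword that recent mlx-lm releases accept on `generate` (the exact default cap varies by version):

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/DeepSeek-R1-4bit")

messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Raise the token budget so the reasoning trace plus the final answer
# fit; max_tokens is assumed to be supported by the installed
# mlx-lm version.
response = generate(
    model, tokenizer, prompt=prompt, max_tokens=2048, verbose=True
)
```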
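
For interactive use you may prefer to stream tokens as they are produced. A hedged sketch using `stream_generate`, which mlx-lm exposes alongside `generate`; in mlx-lm 0.21.x each yielded chunk is a response object whose `text` field holds the newly generated segment (earlier releases yielded plain strings, so check your installed version):

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("mlx-community/DeepSeek-R1-4bit")

messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print each chunk as soon as it is generated; chunk.text assumes the
# response-object interface of mlx-lm 0.21.x.
for chunk in stream_generate(model, tokenizer, prompt, max_tokens=2048):
    print(chunk.text, end="", flush=True)
print()
```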