# RETuning
📖Paper | 📊Dataset | 📦Collection | 🤖Weights | 🐙GitHub
## 🔔 News

- We are planning to release the full Fin-2025. ⭐ Star the repo and stay tuned!
- **`Nov. 23, 2025`: The training dataset for RL (middle difficulty), together with the full 200k Fin-2024, has been released on 🤗 HuggingFace: [RETuning](https://huggingface.co/datasets/linxy/RETuning).**
- **`Nov. 13, 2025`: The evaluation and SFT datasets are released on 🤗 HuggingFace: [RETuning](https://huggingface.co/datasets/linxy/RETuning).**
- **`Nov. 11, 2025`: We released the model weights on 🤗 HuggingFace: [DeepSeek_R1_14B_SFT](https://huggingface.co/linxy/RETuning-DeepSeek_R1_14B_SFT), [DeepSeek_R1_14B_SFT_GRPO](https://huggingface.co/linxy/RETuning-DeepSeek_R1_14B_SFT_GRPO), [DeepSeek_R1_32B_SFT](https://huggingface.co/linxy/RETuning-DeepSeek_R1_32B_SFT), [DeepSeek_R1_32B_SFT_GRPO](https://huggingface.co/linxy/RETuning-DeepSeek_R1_32B_SFT_GRPO).**
- **`Oct. 24, 2025`: We uploaded the preprint to [arXiv](https://arxiv.org/abs/2510.21604).**

## 📖 Findings

**Up/Down movements are much more difficult for LLMs to predict.**

**RETuning enables LLMs to benefit from inference-time scaling in stock movement prediction.**

**Most LLMs are bounded by random guessing in stock movement prediction.**

## 🚀 Quick Start

Python >= 3.8 and PyTorch >= 1.8 are required.

```bash
git clone https://github.com/LinXueyuanStdio/RETuning.git
cd RETuning
pip install -r requirements.txt
```

SFT stage:

```bash
bash pipeline/sft/cold_start_dsr1_14b.sh  # for DeepSeek_R1_14B
bash pipeline/sft/cold_start_dsr1_32b.sh  # for DeepSeek_R1_32B
```

RL stage:

```bash
bash pipeline/rl/train_dsr1_14b.sh  # for DeepSeek_R1_14B
bash pipeline/rl/train_dsr1_32b.sh  # for DeepSeek_R1_32B
```

Evaluation:

```bash
bash pipeline/evaluation/evaluate_14b.sh  # for DeepSeek_R1_14B
bash pipeline/evaluation/evaluate_32b.sh  # for DeepSeek_R1_32B
```

## 📊 Dataset

Prompt length distribution:

## 🤝 Citation

Please consider citing this paper if you use the `code` or `data` from our work.
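The findings above mention inference-time scaling for stock movement prediction. One common way to spend extra inference compute is self-consistency: sample the model several times and aggregate the sampled movement labels by majority vote. The sketch below is illustrative only and is not the repository's actual evaluation code; the `majority_vote` helper and the "up"/"down"/"hold" label set are assumptions for the example.

```python
from collections import Counter

def majority_vote(predictions):
    """Aggregate k sampled movement labels into a single prediction.

    A minimal sketch of inference-time scaling via self-consistency:
    decode the model k times with sampling, then return the most
    frequent label. Ties fall back to the first-seen label, since
    Counter.most_common preserves insertion order on equal counts.
    """
    counts = Counter(predictions)
    return counts.most_common(1)[0][0]

# Hypothetical labels from k=5 sampled decoding runs of one prompt:
samples = ["up", "down", "up", "hold", "up"]
print(majority_vote(samples))  # -> up
```

As k grows, the vote becomes less sensitive to any single noisy sample, which is one mechanism by which a model can benefit from additional inference-time compute.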
Thanks a lot :) (`Xueyuan et al., 2025` preferred, instead of `Lin et al., 2025`)

```bibtex
@article{lin2025retuning0,
  title   = {RETuning: Upgrading Inference-Time Scaling for Stock Movement Prediction with Large Language Models},
  author  = {Xueyuan Lin and Cehao Yang and Ye Ma and Ming Li and Rongjunchen Zhang and Yang Ni and Xiaojun Wu and Chengjin Xu and Jian Guo and Hui Xiong},
  year    = {2025},
  journal = {arXiv preprint arXiv:2510.21604}
}
```

---

RETuning is released under the [MIT](https://opensource.org/licenses/MIT) license.