# RETuning


RETuning: Upgrading Inference-Time Scaling for Stock Movement Prediction with Large Language Models


Xueyuan Lin1,2,3,*, Cehao Yang1,2,*, Ye Ma3, Ming Li3, Rongjunchen Zhang3, Yang Ni1, Xiaojun Wu1,2, Chengjin Xu2,4, Jian Guo2,†, Hui Xiong1,†

1The Hong Kong University of Science and Technology (Guangzhou), 2IDEA Research, 3Hithink RoyalFlush Information Network Co., Ltd, 4DataArc Tech Ltd
*Equal contribution, †Corresponding author

📖Paper | 📊Dataset | 📦Collection | 🤖Weights | 🐙GitHub

![method](./assets/method.png)

## 🔔 News

- We are planning to release the full Fin-2025 dataset. ⭐ Star the repo & stay tuned!
- **`Nov. 23, 2025`: The RL training dataset (middle difficulty), as well as the full 200k Fin-2024, has been released on 🤗 HuggingFace: [RETuning](https://huggingface.co/datasets/linxy/RETuning).**
- **`Nov. 13, 2025`: The evaluation and SFT datasets are released on 🤗 HuggingFace: [RETuning](https://huggingface.co/datasets/linxy/RETuning).**
- **`Nov. 11, 2025`: We released the model weights on 🤗 HuggingFace: [DeepSeek_R1_14B_SFT](https://huggingface.co/linxy/RETuning-DeepSeek_R1_14B_SFT), [DeepSeek_R1_14B_SFT_GRPO](https://huggingface.co/linxy/RETuning-DeepSeek_R1_14B_SFT_GRPO), [DeepSeek_R1_32B_SFT](https://huggingface.co/linxy/RETuning-DeepSeek_R1_32B_SFT), [DeepSeek_R1_32B_SFT_GRPO](https://huggingface.co/linxy/RETuning-DeepSeek_R1_32B_SFT_GRPO).**
- **`Oct. 24, 2025`: We uploaded the preprint to [arXiv](https://arxiv.org/abs/2510.21604).**

## 📖 Findings

**Up/Down movements are much more difficult for LLMs to predict.**

![findings](./assets/difficulty_distribution.png)

**RETuning enables LLMs to benefit from inference-time scaling in stock movement prediction.**

![findings](./assets/scaling_from_RETuning.png)

**Most LLMs are bounded by random guessing in stock movement prediction.**

![findings](./assets/baselines_wo_CoT.png)

## 🚀 Quick Start

Python>=3.8 and PyTorch>=1.8 are required.
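As background for the evaluation scripts below: inference-time scaling in this setting amounts to sampling several reasoning chains per stock/day and aggregating their predicted movements. A minimal majority-vote sketch of that aggregation step (a hypothetical illustration only, not the repository's actual code; the `majority_vote` helper and the sample labels are made up):

```python
from collections import Counter

def majority_vote(predictions):
    """Aggregate k sampled movement predictions ('up'/'down'/'hold')
    into one final answer by majority vote (ties broken by first seen)."""
    counts = Counter(predictions)
    return counts.most_common(1)[0][0]

# Hypothetical sampled chains of thought for one stock on one day:
samples = ["up", "down", "up", "hold", "up"]
print(majority_vote(samples))  # -> up
```

Scaling the number of sampled chains k makes this vote more reliable only if the model's individual predictions are better than random — which is the gap RETuning targets.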
```bash
git clone https://github.com/LinXueyuanStdio/RETuning.git
cd RETuning
pip install -r requirements.txt
```

SFT stage:

```bash
bash pipeline/sft/cold_start_dsr1_14b.sh  # for DeepSeek_R1_14B
bash pipeline/sft/cold_start_dsr1_32b.sh  # for DeepSeek_R1_32B
```

RL stage:

```bash
bash pipeline/rl/train_dsr1_14b.sh  # for DeepSeek_R1_14B
bash pipeline/rl/train_dsr1_32b.sh  # for DeepSeek_R1_32B
```

Evaluation:

```bash
bash pipeline/evaluation/evaluate_14b.sh  # for DeepSeek_R1_14B
bash pipeline/evaluation/evaluate_32b.sh  # for DeepSeek_R1_32B
```

## 📊 Dataset

![dataset](./assets/dataset.png)

Prompt length distribution:

![prompt_length](./assets/prompt_length_distribution.png)

## 🤝 Citation

Please consider citing this paper if you use the `code` or `data` from our work. Thanks a lot :) (`Xueyuan et al., 2025` preferred, instead of `Lin et al., 2025`)

```bibtex
@article{lin2025retuning0,
  title   = {RETuning: Upgrading Inference-Time Scaling for Stock Movement Prediction with Large Language Models},
  author  = {Xueyuan Lin and Cehao Yang and Ye Ma and Ming Li and Rongjunchen Zhang and Yang Ni and Xiaojun Wu and Chengjin Xu and Jian Guo and Hui Xiong},
  year    = {2025},
  journal = {arXiv preprint arXiv:2510.21604}
}
```

---

RETuning is released under the [MIT](https://opensource.org/licenses/MIT) license.
