# ServingMLFastCelery

**Repository Path**: quminzi/ServingMLFastCelery

## Basic Information

- **Project Name**: ServingMLFastCelery
- **Description**: FastAPI + Celery for model deployment. https://towardsdatascience.com/deploying-ml-models-in-production-with-fastapi-and-celery-7063e539a5db
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2021-07-08
- **Last Updated**: 2023-04-20

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# ServingMLFastCelery

A working example of serving an ML model using FastAPI and Celery.

## Usage

**Install requirements:**

```bash
pip install -r requirements.txt
```

**Set environment variables:**

* MODEL_PATH: path to the pickled machine learning model
* BROKER_URI: message broker to be used by Celery, e.g. RabbitMQ
* BACKEND_URI: Celery result backend, e.g. Redis

```bash
export MODEL_PATH=...
export BROKER_URI=...
export BACKEND_URI=...
```

**Start the API:**

```bash
uvicorn app:app
```

**Start a worker node:**

```bash
celery -A celery_task_app:worker worker -l info
```
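The MODEL_PATH variable must point to a model that was serialized with Python's `pickle` module, so the Celery worker can load it at startup. As a minimal sketch of that round trip (the `DummyModel` class and `model.pkl` filename here are hypothetical stand-ins; a real deployment would pickle a trained estimator, e.g. from scikit-learn):

```python
import os
import pickle
import tempfile

class DummyModel:
    """Hypothetical stand-in for a trained model.

    A real deployment would pickle a fitted estimator exposing
    a predict() method, e.g. a scikit-learn model.
    """
    def predict(self, features):
        # Toy logic: just sum the input features
        return sum(features)

# Serialize the model to the path that MODEL_PATH would point at
model_path = os.path.join(tempfile.gettempdir(), "model.pkl")
with open(model_path, "wb") as f:
    pickle.dump(DummyModel(), f)

# At worker startup, the task would load the model back the same way
with open(model_path, "rb") as f:
    model = pickle.load(f)

print(model.predict([1, 2, 3]))  # prints 6
```

Note that unpickling requires the model's class to be importable in the worker process, which is why the worker environment must install the same ML dependencies used at training time.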