# llama

**Repository Path**: mirrors_intel/llama

## Basic Information

- **Project Name**: llama
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2020-08-08
- **Last Updated**: 2025-10-04

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# llama

Llama ${project.version}

Llama is a Yarn Application Master that mediates the management and monitoring of cluster resources between Impala and Yarn.

Llama provides a Thrift API for Impala to request and release resource allocations outside of Yarn-managed container processes.

For details on how to build Llama, refer to the BUILDING.txt file.

For details on how to use Llama, please refer to the Llama documentation.
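As a rough illustration of the Thrift API mentioned above, the sketch below shows the usual Apache Thrift client pattern (transport, protocol, generated stub) that a caller such as Impala might use to reserve and release resources. The stub and message names (`LlamaAMService`, `TLlamaAMReserveRequest`, `TLlamaAMReleaseRequest`) and the port are assumptions for illustration only; the authoritative service and struct definitions live in the project's Thrift IDL, from which the real stubs are generated.

```java
// Minimal sketch of a Thrift client talking to Llama's AM service.
// NOTE: the generated stub and message names below (LlamaAMService,
// TLlamaAMReserveRequest, TLlamaAMReleaseRequest) are assumptions for
// illustration; consult the Thrift IDL shipped with the project for
// the actual service and struct definitions, and generate the stubs
// with the Thrift compiler before building this class.
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class LlamaClientSketch {
    public static void main(String[] args) throws Exception {
        // Open a plain socket transport to the Llama Thrift endpoint
        // (host and port are placeholders).
        TTransport transport = new TSocket("llama-host", 15000);
        transport.open();
        TBinaryProtocol protocol = new TBinaryProtocol(transport);

        // Hypothetical Thrift-generated client stub for the Llama AM service.
        LlamaAMService.Client client = new LlamaAMService.Client(protocol);

        // Ask Llama to reserve cluster resources on behalf of Impala ...
        TLlamaAMReserveRequest reserve = new TLlamaAMReserveRequest();
        // ... populate the reservation (queue, resources, locality) ...
        TLlamaAMReserveResponse granted = client.Reserve(reserve);

        // ... and release the reservation once the work that needed it is done.
        TLlamaAMReleaseRequest release = new TLlamaAMReleaseRequest();
        // ... reference the reservation handle returned in `granted` ...
        client.Release(release);

        transport.close();
    }
}
```

The key point of the pattern is that the reservation and release calls go through Llama's Thrift endpoint rather than through Yarn container launch, which is what lets Impala's long-running daemons acquire and return cluster resources without being managed as Yarn containers themselves.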