# gpu-info

**Repository Path**: yangkunjmd/gpu-info

## Basic Information

- **Project Name**: gpu-info
- **Description**: A generic tool and interface specification for querying GPU information
- **Primary Language**: C++
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 5
- **Created**: 2023-04-03
- **Last Updated**: 2023-04-03

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

# About gpu-info

## Overview

The gpu-info project consists of three parts:

- gpu-info (the `gpu-info` folder): defines the GPU information query interface and its implementation. `include/gpu.h` is the interface header and `src` holds the implementation, which is built on an OpenGL extension.
- gpu-info-get (the `gpu-info-get` folder): a sample program that calls the interface above.
- This document.

## Building and Running the Demo

### build

```
mkdir build && cd build
cmake ..
make
```

### run

```
./GPU-info-get
```

## The gpu-info Interface

The interface header is `gpu.h`. It defines a set of enums that select which piece of data to query:

```
typedef enum GPUStringInfoFlag {
    /* flag to get gpu name */
    GPU_NAME = 0x10600,
    /* flag to get gpu vendor */
    GPU_VENDOR = 0x10601,
    /* flag to get gpu driver version */
    GPU_DRIVER_VERSION = 0x10602,
    /* flag to get gpu internal name */
    GPU_INTERNAL_NAME = 0x10603,
    /* flag to get gpu internal version */
    GPU_VENDOR_INTERNAL_VERSION = 0x10604,
    /* flag to get gpu shader type */
    GPU_SHADER_TYPE = 0x10605,
    /* flag to get gpu video storage type */
    GPU_VIDEO_STORAGE_TYPE = 0x10606,
    /* flag to get gpu chipset publish date */
    GPU_CHIPSET_PUBLISH_DATE = 0x10607,
    /* flag to get gpu driver publish date */
    GPU_DRIVER_PUBLISH_DATE = 0x10608,
    /* flag to get gpu manufacturer id */
    GPU_MANUFACTURER_ID = 0x10609,
    /* flag to get gpu chipset id */
    GPU_CHIPSET_ID = 0x10610,
    /* flag to get gpu screen interface type */
    GPU_SCREEN_INTERFACE_TYPE = 0x10611,
    /* flag to get gpu supported OpenGL version */
    GPU_SUPPORT_OPENGL_VERSION = 0x10612,
    /* flag to get gpu supported Vulkan version */
    GPU_SUPPORT_VULKAN_VERSION = 0x10613,
    /* flag to get gpu supported OpenCL version */
    GPU_SUPPORT_OPENCL_VERSION = 0x10614,
    /* flag to get gpu bus interface type */
    GPU_BUS_INTERFACE_TYPE = 0x10615
}
GPUStringInfoFlag;

typedef enum GPUIntInfoFlag {
    /* flag to get gpu generation process (Unit: nm) */
    GPU_GENERATION_PROCESS = 0x10701,
    /* flag to get gpu power dissipation (Unit: W) */
    GPU_POWER_DISSIPATION = 0x10702,
    /* flag to get gpu video storage (Unit: M) */
    GPU_VIDEO_STORAGE = 0x10703,
    /* flag to get gpu raster count */
    GPU_RASTER_COUNT = 0x10704,
    /* flag to get gpu shader count */
    GPU_SHADER_COUNT = 0x10705,
    /* flag to get gpu memory bandwidth (Unit: MBps) */
    GPU_MEMORY_BANDWIDTH = 0x10706,
    /* flag to get gpu memory interface bandwidth (Unit: bit) */
    GPU_MEMORY_INTERFACE_BANDWIDTH = 0x10707,
    /* flag to get gpu pixel fillrate (Unit: MPixel/s) */
    GPU_PIXEL_FILLRATE = 0x10708,
    /* flag to get gpu texture fillrate (Unit: MTexel/s) */
    GPU_TEXTURE_FILLRATE = 0x10709,
    /* flag to get gpu max resolution width (Unit: Pixel) */
    GPU_MAX_RESOLUTION_WIDTH = 0x10710,
    /* flag to get gpu max resolution height (Unit: Pixel) */
    GPU_MAX_RESOLUTION_HEIGHT = 0x10711,
    /* flag to get gpu max screen count */
    GPU_MAX_SCREEN_COUNT = 0x10712,
    /* flag to get gpu effective bandwidth (Unit: M) */
    GPU_EFFECTIVE_BANDWIDTH = 0x10713,
    /* flag to get gpu normal accelerate frequency (Unit: MHz) */
    GPU_NORM_ACCELERATE_FREQUENCY = 0x10714,
    /* flag to get gpu oc accelerate frequency (Unit: MHz) */
    GPU_OC_ACCELERATE_FREQUENCY = 0x10715,
    /* flag to get gpu default frequency (Unit: MHz) */
    GPU_DEFAULT_FREQUENCY = 0x10716,
    /* flag to get gpu max frequency (Unit: MHz) */
    GPU_MAX_FREQUENCY = 0x10717,
    /* flag to get gpu bus bitwidth (Unit: bit) */
    GPU_BUS_BITWIDTH = 0x10718,
    /* flag to get gpu current frequency (Unit: MHz) */
    GPU_CURRENT_FREQUENCY = 0x10719,
    /* flag to get gpu current clock frequency (Unit: MHz) */
    GPU_CURRENT_CLOCK_FREQUENCY = 0x10720,
    /* flag to get gpu used video storage percent */
    GPU_USAGE_VIDEO_STROAGE_PERCENT = 0x10721,
    /* flag to get gpu used video storage size (Unit: M) */
    GPU_USAGE_VIDEO_STROAGE_SIZE = 0x10722
} GPUIntInfoFlag;

typedef enum
GPUFloatInfoFlag {
    /* flag to get gpu temperature (Unit: Celsius) */
    GPU_TEMPERATURE = 0x10801,
    /* flag to get gpu vddc (Unit: Volts) */
    GPU_VDDC = 0x10802
} GPUFloatInfoFlag;
```

Use gpuEnumerateDevices to enumerate all available GPU devices. Whether a device supports a given query is checked with gpuIsSupport; if it returns GPU_TRUE, the value can then be retrieved with gpuGetString, gpuGetInt, or gpuGetFloat.

The interface functions are declared as follows:

```
APIENTRY GPUError GPUAPIENTRY gpuEnumerateDevices(GPUuint* deviceCount, GPUDevice *devices);
APIENTRY GPUboolean GPUAPIENTRY gpuIsSupport(GPUDevice device, GPUInfoFlag flag);
APIENTRY GPUError GPUAPIENTRY gpuGetString(GPUDevice device, GPUStringInfoFlag flag, GPUuint valueSize, GPUbyte *value);
APIENTRY GPUError GPUAPIENTRY gpuGetInt(GPUDevice device, GPUIntInfoFlag flag, GPUuint *value);
APIENTRY GPUError GPUAPIENTRY gpuGetFloat(GPUDevice device, GPUFloatInfoFlag flag, GPUfloat *value);
```

- gpuEnumerateDevices enumerates the currently available GPU devices.
- gpuIsSupport takes one of the flags defined above and returns GPU_TRUE or GPU_FALSE: GPU_FALSE means the requested query is not supported; GPU_TRUE means it is.
- gpuGetString retrieves a string-typed result.
- gpuGetInt retrieves an integer result.
- gpuGetFloat retrieves a floating-point result.

## Internal Implementation of gpu-info

gpu-info is implemented via an OpenGL extension whose entry points match the gpu-info interface. The extension entry points are looked up with glXGetProcAddress, and gpu-info implements its functionality by calling the resolved entry points.

## gpu-info-get

- gpu-info-get is a sample that uses gpu-info to obtain GPU information.
- The code is in main.cpp under the gpu-info-get folder.

```
#include <iostream>
#include <string>
#include <vector>

#include "gpu.h"
#include "nlohmann/json.hpp"

using json = nlohmann::ordered_json;

void printJsonFormat(GPUDevice device, json* output);

int main()
{
    GPUuint deviceCount = 0;
    gpuEnumerateDevices(&deviceCount, nullptr);
    if (deviceCount == 0) {
        std::cout << "no device" << std::endl;
        return -1;
    }

    std::vector<GPUDevice> devices(deviceCount);
    GPUError ret = gpuEnumerateDevices(&deviceCount, devices.data());
    if (ret != GPU_NO_ERROR) {
        std::cout << "enumerate devices error" << std::endl;
        return -1;
    }

    json result = json::array();
    for (auto& device : devices) {
        result.push_back({});
        printJsonFormat(device, &result.back());
    }
    std::cout << result.dump(4) << std::endl;
    return 0;
}

void getStringInfoToJson(GPUDevice device,
                         GPUStringInfoFlag flag, const char *key, json* result)
{
    if (!gpuIsSupport(device, flag)) {
        (*result)[key] = "";
    } else {
        char value[512] = {0};
        GPUError ret = gpuGetString(device, flag, sizeof(value), value);
        if (ret == GPU_NO_ERROR) {
            (*result)[key] = value;
        } else {
            (*result)[key] = "";
        }
    }
}

void getIntInfoToJson(GPUDevice device, GPUIntInfoFlag flag, const char *key, json* result)
{
    if (!gpuIsSupport(device, flag)) {
        (*result)[key] = 0;
    } else {
        GPUuint value = 0;
        GPUError ret = gpuGetInt(device, flag, &value);
        if (ret == GPU_NO_ERROR) {
            (*result)[key] = value;
        } else {
            (*result)[key] = 0;
        }
    }
}

void getFloatInfoToJson(GPUDevice device, GPUFloatInfoFlag flag, const char *key, json* result)
{
    if (!gpuIsSupport(device, flag)) {
        (*result)[key] = 0.0f;
    } else {
        GPUfloat value = 0.0f;
        GPUError ret = gpuGetFloat(device, flag, &value);
        if (ret == GPU_NO_ERROR) {
            (*result)[key] = value;
        } else {
            (*result)[key] = 0.0f;
        }
    }
}

void printJsonFormat(GPUDevice device, json* output)
{
    getStringInfoToJson(device, GPU_NAME, "name", output);
    getIntInfoToJson(device, GPU_GENERATION_PROCESS, "genProc", output);
    getStringInfoToJson(device, GPU_VENDOR, "vendor", output);
    getStringInfoToJson(device, GPU_DRIVER_VERSION, "drvVer", output);
    getStringInfoToJson(device, GPU_VENDOR_INTERNAL_VERSION, "intlVer", output);
    getIntInfoToJson(device, GPU_RASTER_COUNT, "rasterCount", output);
    getStringInfoToJson(device, GPU_INTERNAL_NAME, "intlName", output);
    getIntInfoToJson(device, GPU_MEMORY_BANDWIDTH, "memBw", output);
    getIntInfoToJson(device, GPU_MEMORY_INTERFACE_BANDWIDTH, "memIntfBw", output);
    getIntInfoToJson(device, GPU_PIXEL_FILLRATE, "pixFillrate", output);
    getIntInfoToJson(device, GPU_TEXTURE_FILLRATE, "texFillrate", output);
    getIntInfoToJson(device, GPU_POWER_DISSIPATION, "pd", output);
    getIntInfoToJson(device, GPU_VIDEO_STORAGE, "vs", output);
    getStringInfoToJson(device, GPU_VIDEO_STORAGE_TYPE, "vsType", output);
    getStringInfoToJson(device, GPU_CHIPSET_PUBLISH_DATE, "chipsetPubDate", output);
    getStringInfoToJson(device, GPU_DRIVER_PUBLISH_DATE, "drvPubDate", output);
    getStringInfoToJson(device, GPU_MANUFACTURER_ID, "manufacturerID", output);
    getStringInfoToJson(device, GPU_CHIPSET_ID, "chipsetID", output);
    getIntInfoToJson(device, GPU_MAX_RESOLUTION_WIDTH, "maxResWidth", output);
    getIntInfoToJson(device, GPU_MAX_RESOLUTION_HEIGHT, "maxResHeight", output);
    getIntInfoToJson(device, GPU_MAX_SCREEN_COUNT, "maxScreenCount", output);
    getStringInfoToJson(device, GPU_SCREEN_INTERFACE_TYPE, "screenIntfType", output);
    getIntInfoToJson(device, GPU_SHADER_COUNT, "shaderCount", output);
    getStringInfoToJson(device, GPU_SHADER_TYPE, "shaderType", output);
    getIntInfoToJson(device, GPU_EFFECTIVE_BANDWIDTH, "effectiveBw", output);
    getIntInfoToJson(device, GPU_NORM_ACCELERATE_FREQUENCY, "normAcFreq", output);
    getIntInfoToJson(device, GPU_OC_ACCELERATE_FREQUENCY, "ocAcFreq", output);
    getStringInfoToJson(device, GPU_SUPPORT_OPENGL_VERSION, "OpenGLVer", output);
    getStringInfoToJson(device, GPU_SUPPORT_OPENCL_VERSION, "OpenCLVer", output);
    getStringInfoToJson(device, GPU_SUPPORT_VULKAN_VERSION, "VulkanVer", output);
    getIntInfoToJson(device, GPU_DEFAULT_FREQUENCY, "defFreq", output);
    getIntInfoToJson(device, GPU_MAX_FREQUENCY, "maxfreq", output);
    getStringInfoToJson(device, GPU_BUS_INTERFACE_TYPE, "busIntfType", output);
    getIntInfoToJson(device, GPU_BUS_BITWIDTH, "busBw", output);

    auto& realTimeResult = (*output)["realTime"];
    getFloatInfoToJson(device, GPU_TEMPERATURE, "temp", &realTimeResult);
    getIntInfoToJson(device, GPU_USAGE_VIDEO_STROAGE_PERCENT, "usageVSPer", &realTimeResult);
    getIntInfoToJson(device, GPU_USAGE_VIDEO_STROAGE_SIZE, "usedVSSize", &realTimeResult);
    getFloatInfoToJson(device, GPU_VDDC, "vddc", &realTimeResult);
    getIntInfoToJson(device, GPU_CURRENT_FREQUENCY, "curFreq", &realTimeResult);
    getIntInfoToJson(device, GPU_CURRENT_CLOCK_FREQUENCY, "curClockFreq", &realTimeResult);
}
```
The sample includes the header gpu.h and links against the gpu-info library. It calls the interface to query GPU information and prints the results as indented JSON. For unsupported queries, string values default to "", integers to 0, and floats to 0.0.

GPU-info-get produces output like the following:

```
[
    {
        "name": "GenBu09",
        "genProc": 12,
        "vendor": "Sietium Inc",
        "drvVer": "1.2.4",
        "intlVer": "tiger.2023.3.6.7.2401",
        "rasterCount": 64,
        "intlName": "tiger",
        "memBw": 3200,
        "memIntfBw": 64,
        "pixFillrate": 6400,
        "texFillrate": 36864,
        "pd": 32,
        "vs": 4096,
        "vsType": "GDDR6",
        "chipsetPubDate": "2023-12-30",
        "drvPubDate": "2024-02-24",
        "manufacturerID": "sietium",
        "chipsetID": "GenBu2023&DEV_0028",
        "maxResWidth": 4096,
        "maxResHeight": 4096,
        "maxScreenCount": 6,
        "screenIntfType": "HDMI/VGA",
        "shaderCount": 128,
        "shaderType": "union shader unit",
        "effectiveBw": 0,
        "normAcFreq": 0,
        "ocAcFreq": 0,
        "OpenGLVer": "OpenGL(4.5)",
        "OpenCLVer": "OpenCL(1.2)",
        "VulkanVer": "",
        "defFreq": 0,
        "maxfreq": 0,
        "busIntfType": "AGP/PCI/PCI-Express",
        "busBw": 256,
        "realTime": {
            "temp": 40.380001068115234,
            "usageVSPer": 36,
            "usedVSSize": 368,
            "vddc": 48.619998931884766,
            "curFreq": 4000,
            "curClockFreq": 800
        }
    },
    {
        "name": "GenBu10",
        "genProc": 24,
        "vendor": "Sietium Inc",
        "drvVer": "1.2.10",
        "intlVer": "lion.2023.3.28.1.2201",
        "rasterCount": 128,
        "intlName": "lion",
        "memBw": 6400,
        "memIntfBw": 128,
        "pixFillrate": 6400,
        "texFillrate": 36864,
        "pd": 64,
        "vs": 8392,
        "vsType": "GDDR6",
        "chipsetPubDate": "2023-12-31",
        "drvPubDate": "2024-02-25",
        "manufacturerID": "sietium",
        "chipsetID": "GenBu2023&DEV_0029",
        "maxResWidth": 8392,
        "maxResHeight": 8392,
        "maxScreenCount": 12,
        "screenIntfType": "HDMI/VGA",
        "shaderCount": 128,
        "shaderType": "union shader unit",
        "effectiveBw": 0,
        "normAcFreq": 0,
        "ocAcFreq": 0,
        "OpenGLVer": "OpenGL(4.6)",
        "OpenCLVer": "OpenCL(1.2)",
        "VulkanVer": "",
        "defFreq": 0,
        "maxfreq": 0,
        "busIntfType": "AGP/PCI/PCI-Express",
        "busBw": 256,
        "realTime": {
            "temp": 80.37999725341797,
            "usageVSPer": 72,
            "usedVSSize": 728,
            "vddc": 88.62000274658203,
            "curFreq": 4000,
            "curClockFreq": 800
        }
    }
]
```
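The build step earlier runs cmake, which implies a top-level CMakeLists.txt wiring the two folders together. As orientation only, a hypothetical sketch of such a file; every target and path name here is an assumption, not the repository's actual build configuration:

```cmake
cmake_minimum_required(VERSION 3.10)
project(gpu-info)

# Build the library from the gpu-info folder (hypothetical layout).
add_library(gpu-info gpu-info/src/gpu.cpp)
target_include_directories(gpu-info PUBLIC gpu-info/include)

# Build the demo and link it against the library, as the README describes.
add_executable(GPU-info-get gpu-info-get/main.cpp)
target_link_libraries(GPU-info-get PRIVATE gpu-info)
```

A consumer of the library would follow the same shape: add `gpu-info/include` to the include path and link the gpu-info library.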